This User Manual covers v0.1.0 of the SDK. If you are using the newer v0.2.b4, see the manual for that version.

User Manual (v0.1.0)


Getting Started

Download the SpatialStories SDK

When your subscription has been approved, log in and go to the download page to get the SDK. Unzip it to obtain a file with the standard .unitypackage extension.

Download Unity

Make sure that you have the recommended version of Unity installed for the SDK. SpatialStories has been successfully tested with Unity 5.6.2f1 which can be downloaded here. Although Unity 2017 may also be compatible with the SDK, we can’t guarantee it at this point.

Import the SDK in Unity

Open Unity and create a new project (or open an existing one). Go to Assets -> Import Package -> Custom Package…, select the .unitypackage file you’ve just downloaded and click on Import. This will take a few minutes. Once it has been imported into your project, you can delete the downloaded .unitypackage file from your hard drive.

Input Manager

In order to have access to the inputs of the different manufacturers’ controllers (HTC Vive, Oculus Touch), you need to configure the InputManager. There are two ways to do it:

  1. Right click anywhere in the Project Tab and select Repair Input Manager.
  2. Once you have converted the camera into an Interactive Camera (IO), the system will check, when you hit Play, whether you have the necessary configuration to use the input system. If you don’t have the proper input setup, you will be asked to configure it; simply follow the instructions.

Note that your original InputManager file will be kept as a backup in the same location!



The Camera

SpatialStories uses its own camera with the following functionalities:

  • Hands Tracking

    Allows the use of your hands

  • Teleportation

    Allows you to teleport in the virtual world

  • Player's Torso

    Allows you to differentiate your head from your torso

To benefit from the SpatialStories Camera, all you have to do is convert the standard Unity Camera into an Interactive Camera. Right-click on the Camera in the Hierarchy Tab and select SpatialStories -> Convert into Interactive Camera.

Once converted, your new camera will have the following structure:

The Camera (IO) gives you access to the Gaze_InputManager component (in the Inspector).

Left/Right Hand Active: defines which hand(s) to use in your experience.

Track Position/Orientation: defines if you want to track your hands’ position and/or orientation (it may be useful to deactivate these options when using a controller with no positional tracking abilities).

Haptic Audio Clip Min/Max: defines which audio clip to use for haptic feedback on controllers that support this functionality.

On the Head (IO) node, you have the standard Interactive Object structure with the addition of the Player Torso node.

The Torso represents the approximate position of the torso below the head.

You can change the distance between the head and the torso in the inspector.

Left Hand (IO) represents the left hand of the player; Right Hand (IO) represents the right hand. On each of them, you have access to the Gaze_Teleporter script, where you can tweak the teleport settings.



Teleport

To move around in the virtual environment, the user has the ability to teleport. This is especially useful if your experience contains large spaces or if you are using a device with no positional tracking.

By default, the Teleport button is the joystick of the Oculus Touch controller. You first need to move the joystick in a direction for the Teleport Beam to show up. When teleporting is possible, the Beam is blue and you can see a circular target at the end of it. Simply release the joystick to teleport to that target.

To set up your teleport zones (and avoid being able to teleport onto every collider), you can create a Teleport Layer and select it in the Gaze_Teleporter settings (see the Details section below). To allow teleporting onto a surface, the location’s collider has to be on the Teleport Layer.

The Teleport Settings are located in the Interactive Camera (Camera IO) rig, more precisely in Left Hand (IO) and Right Hand (IO). If you want, you can choose a specific set of Teleport parameters for each hand.

Camera (IO) -> Left Hand (IO) -> Gaze_Teleporter
Camera (IO) -> Right Hand (IO) -> Gaze_Teleporter

When the Teleport Beam is blue, you can teleport to the destination. The color can be customized with the ‘Good Spot Col‘ property.

When the Teleport Beam is red, teleporting is not allowed. The color can be customized with the ‘Bad Spot Col‘ property.

  • Gyro Prefab : mandatory! The tip of the arc marking the target destination (circle)
  • Orient On Teleport : allows the user to choose their orientation on teleport by rotating the joystick. The rotating cone shape inside the circle represents the direction the user will face once teleported.
  • Input Threshold : below this value, the input won’t be taken into account (avoids accidental inputs)
  • Teleport Layer : the layers the teleport is allowed on
  • Transition : [not used, will be removed]
  • Fade Mat : [not used, will be removed]
  • Fade Duration : [not used, will be removed]
  • Gravity : to change the shape of the arc
  • Initial Vel Magnitude : [not used, will be removed]
  • Time Step : [not used, will be removed]
  • Max Distance : to limit the teleport distance [meters]
  • Dash Speed : [not used, will be removed]
  • Teleport Cooldown : cooldown between two teleport actions
  • Mat Scale : to scale the material on the arc
  • Tex Movement Speed : to animate the arc texture
  • Good Spot Col : defines the color when teleport is allowed
  • Bad Spot Col : defines the color when teleport is NOT allowed
  • Gyro Color : defines the color of the teleport gyro (at the tip of the arc)
  • Gyro UI : defines the look of the tip of the arc
  • Arc Line Width : sets the width of the arc
  • Arc Material : defines the material of the arc


Create your first Interactive Object (IO)

SpatialStories uses what we call ‘Interactive Objects’, hereafter shortened to IO(s).

IOs are any objects you want to add life and interactions to. For instance, you can create a simple cube from within Unity and convert it to an IO or you can import your model in Unity from your favorite 3D software and convert it to an IO. You can also use 2D assets or anything you want. As soon as an object has been converted into an IO, you will be able to grab it, for example.

Converting an object into an IO

To convert your object into an IO, there are two options:

    • Right-click on your object in the Hierarchy view and choose: SpatialStories -> Convert into an interactive object
    • Select your object in the hierarchy or in the scene view and in the menu bar select SpatialStories -> Convert -> Into an interactive object

IO’s Timeframe

An Interactive Object has several lifecycle phases, together called its timeframe, described below:

Conditions are checked, and an interaction can occur, only during the ACTIVE phase.



Interactive Object (Details)

  • List of user-defined interactions.
  • You can add as many interactions as needed for each IO.
  • Each interaction is composed of Conditions and Actions.
  • The actions will be executed once their related conditions are satisfied.
  • If no conditions are specified for an interaction, it is considered satisfied by default, thus executing its actions immediately.

  • List of user-defined visual assets defining the appearance of this IO.
  • Visuals can be anything 2D, 3D…
  • You can also leave this empty (keep the Visuals node and remove anything inside it) to use this IO as a trigger zone, like a checkpoint or an invisible button.

  • Proximity defines the zone around the IO where collisions can occur.
  • Proximity will be used to detect when an object is in proximity with another object (including the player).
  • Each IO has a proximity.
  • It’s up to the user to define its shape and size. By default, we provide a simple box but you can replace it with any shape as long as ‘Is Trigger’ is checked.

  • Reacts to the gaze of the user (camera forward direction).
  • It’s up to the user to define its shape and size. By default, we provide a simple cube but you can replace it with any shape as long as ‘Is Trigger’ is checked.

  • This zone defines where the IO will react to hand hovering.
  • It’s up to the user to define its shape and size. By default, we provide a simple box collider but you can replace it with any shape as long as ‘Is Trigger’ is checked.

  • This zone defines where the IO can be taken with Grab ability.
  • It’s up to the user to define its shape and size. By default, we provide a simple box collider but you can replace it with any shape as long as ‘Is Trigger’ is checked. In general this zone matches the shape of the object.

  • This transform object defines the position and rotation the IO will translate into when grabbed by the left hand.
  • It’s up to the user to define its position and rotation.

  • This transform object defines the position and rotation the IO will translate into when grabbed by the right hand.
  • It’s up to the user to define its position and rotation.

  • This is where you add your audio sources that will be used to play your audio clips.



Add Interactions to your IO(s)

An Interaction is defined by two elements :

  1. A set of conditions
  2. A set of actions

When all conditions are met, the actions are triggered.

All interactions are defined in the Interactions node of the structure of an IO. Let’s see how it works !
Default Interaction
In the Hierarchy view, if you select the default interaction (the unique child of Interactions) you will see an interaction script attached to it in the inspector. Here you have two checkboxes : one for setting conditions called Conditions and one for actions called Actions.
Interaction script
The principle is simple : for any interaction, once its conditions are validated, the defined associated actions will take place (if you don’t define conditions but only actions, they will occur instantly).



Conditions

  • Valid if the specified object is gazed at by the camera (player)

  • Valid if the specified list of objects are in proximity (meaning that the Proximity colliders of the objects are colliding with each other).
  • OnEnter / OnExit lets you define whether you want the condition to validate when specified IOs are entering or exiting their proximity zone.
  • If ‘Require All’ is checked, all the specified objects in this list have to be in proximity with each other. If not checked, as soon as two of them are in proximity, this condition will be validated.

For this option, you have 3 fields to specify :

  1. The hand you want to take into account (LEFT, RIGHT, BOTH)
  2. The action (TOUCH, UNTOUCH)
  3. The IO

For instance, you can say that the LEFT hand should TOUCH the Button (IO).

Or BOTH hands should UNTOUCH the Door (IO).

For this option, you have 3 fields to specify :

  1. The hand you want to take into account (LEFT, RIGHT, BOTH)
  2. The action (GRAB, UNGRAB)
  3. The IO

For instance, you can say that the LEFT hand should GRAB the Key.

Or BOTH hands should UNGRAB the Lever.

  • Lets you define which button/axis will validate the condition.
  • If ‘Require All’ is checked, all the specified inputs in this list have to be pressed. Otherwise, as soon as one of them is used, this condition will be validated.

  • The condition will be validated once a teleport has occurred.
  • You can decide on which teleport phase the condition will be validated.
  • There are 5 distinct phases when the user uses the teleport system.
    • TELEPORT : when the user has been teleported
    • ACTIVATED : when the user starts the process of teleport (is choosing where to teleport)
    • DEACTIVATED : when the user stops the teleport process (release the teleport button without teleporting)
    • BAD_DESTINATION : when the user tries to teleport on a bad destination.
    • GOOD_DESTINATION : when the user tries to teleport on a good destination.

Lets you define a dependency that will validate the condition.

Each Interactive Object has three distinct phases :

  • Active : when an IO finishes its pending phase, based on its delay and its dependencies
  • Triggered : when an IO is triggered
  • After : when an IO reaches its expiration time, if specified

With this option, you can setup a condition to be validated based upon the validation of another condition.

It is useful to create chain reactions between IOs.

If ‘Require All’ is checked, all the specified dependencies in this list have to be validated. Otherwise, as soon as one of them is validated, this condition will be validated.

If you check Custom Condition, you will have to add your own script component on this interaction.
Your C# script must inherit from Gaze_AbstractConditions.
This is useful when none of the built-in conditions meet your needs and you want to create your own custom condition.
In your script, where you define the conditions, you’ll have to call the ValidateCustomCondition(true) method in order to validate your custom condition.
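As a minimal sketch, a custom condition could look like the following. Only Gaze_AbstractConditions and ValidateCustomCondition(true) come from the SDK; the class name and the timer logic are hypothetical, and we assume the base class behaves like a standard Unity MonoBehaviour:

```csharp
using UnityEngine;

// Hypothetical example: validates the custom condition after the user
// has waited 5 seconds. Only Gaze_AbstractConditions and
// ValidateCustomCondition(true) are actual SDK members; everything
// else is illustrative.
public class WaitFiveSecondsCondition : Gaze_AbstractConditions
{
    private float elapsed;
    private bool validated;

    void Update()
    {
        if (validated)
            return;

        elapsed += Time.deltaTime;
        if (elapsed >= 5f)
        {
            validated = true;
            // Tell the interaction this custom condition is now satisfied.
            ValidateCustomCondition(true);
        }
    }
}
```

Add this component on the interaction’s GameObject after checking Custom Condition.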

Represents the amount of time in seconds this interaction will have to wait before allowing the user to validate the conditions.

E.g. setting a value of 10 means the user will have to wait 10 seconds before they can try to satisfy the specified conditions (Gaze, for instance).

Represents the amount of time in seconds during which this interaction can be triggered once it is in the active phase (after its delay and dependency validation).

E.g. setting a value of 10 means the conditions of this interaction can be validated for 10 seconds. After this, the conditions can’t be satisfied anymore.

Auto Trigger is useful when you want an action to occur without taking conditions into account.

  • If set to NONE, it will be deactivated. All the specified conditions have to be met.
  • If set to START, no matter which other conditions are specified, this interaction will be triggered immediately.
  • If set to END, at the end of the Expires time, the conditions will be automatically validated even if the other conditions are not met (useful to avoid deadlocks in the continuation of an experience).

If you want your interaction to be triggered more than once, you can use this setting to reload the interaction.

  • Modes :
    • MANUAL ONLY : the reload has to be called through your own C# script
    • INFINITE : this interaction can be reloaded an unlimited number of times during its active phase
    • FINITE : this interaction can be reloaded a limited number of times during its active phase

Example : if you have a button that opens a door, you want that interaction to happen every time the user presses the button. Let’s say you create an interaction called ButtonOpensDoor and check Reload with mode INFINITE.



Actions

  • Activates or deactivates the visuals of this IO.
  • Activates or deactivates the colliders associated with this IO.
  • Modifies the grab ability. Used to add or remove the grab ability of an object.
  • Modifies the grab distance at which the user can grab this IO.
  • Modifies the grab mode. Choose between ATTRACT or LEVITATE mode.
  • Used to add the touch ability to this IO.
  • Used to modify the maximum distance this IO can be touched at.
  • Used to modify the gravity on an IO.
  • Used to play an audio clip.
  • Used to play an animation.
  • Used to destroy this IO.



Drag and Drop

Note : With this first release, you have the ability to use the Drag and Drop functionality. The process is quite simple but will be greatly simplified in the next releases of the SDK.

Drag and Drop is the ability to set a target place for an object to go to (in this example, the hat is dragged and dropped on the character’s head).

Set up Drag and Drop

Follow the steps below to set up a Drag and Drop interaction:

  1. Create an IO for your Drop object (see the Create your first Interactive Object (IO) section to know how to do it);
  2. Configure the Drop target where you drop the object (see explanations below);
  3. Configure the Drop object (see explanations below).

The fastest way to create a drop target is to duplicate your drop object IO, add the word Target to its name (easier to identify) and move it aside from the object to drop. Like that, the visuals will be the same. Then follow the instructions below :

  • Assign our Ghost material to your drop target object to easily distinguish it from your drop object (optional)
  • On the Gaze_InteractiveObject (at the top level of your IO) component, set the Manipulation Modes to NONE
  • At the root level of your IO, on the Rigidbody component, set Use Gravity to false
  • At the root level of your IO, on the Rigidbody component, set Is Kinematic to true
  • Set your Visuals’ colliders (if you have any) to Is Trigger (to avoid collisions with your drop object)
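If you prefer to script the Rigidbody and collider steps above rather than setting them in the Inspector, a sketch using only standard Unity APIs could look like this (the component name is hypothetical; setting Manipulation Modes to NONE on Gaze_InteractiveObject still has to be done in the Inspector):

```csharp
using UnityEngine;

// Hypothetical helper replicating the manual drop target steps above.
// Attach it to the root of the drop target IO.
public class DropTargetSetup : MonoBehaviour
{
    void Awake()
    {
        // Disable gravity and make the body kinematic.
        Rigidbody body = GetComponent<Rigidbody>();
        if (body != null)
        {
            body.useGravity = false;
            body.isKinematic = true;
        }

        // Set the Visuals' colliders to triggers to avoid
        // collisions with the drop object.
        foreach (Collider col in GetComponentsInChildren<Collider>())
        {
            col.isTrigger = true;
        }
    }
}
```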


Select your drop object IO in the scene then :

  • Make sure the Gaze_DragAndDropManager is enabled
  • In the Gaze_DragAndDropManager, assign the Current Drag And Drop Target field with the interaction of your drop target
  • In the Gaze_DragAndDropManager, define the conditions under which you want to allow the drop (detailed in the Conditions section below)
  • In the Gaze_DragAndDropCondition, in an Interaction of your IO, assign the Target Object with the drop target IO

At the top level of an Interactive Object, you have a Gaze_DragAndDropManager component. Here you can define the conditions to allow a drop, as follows :

Minimum Distance

  • the distance between the drop object and the drop target. If not within this distance, the drop is not allowed.

Respect X/Y/Z Axis (and mirrored)

  • if you choose to constrain the drop to the target’s rotation, here you can specify which axes are mandatory in order to allow the drop. I.e. if you check ‘Respect X Axis’, the drop object must have the same orientation on its X axis as the drop target (within the specified threshold).

Angle Threshold

  • represents the precision, as a percentage, with which the orientation of the drop object should match the drop target. I.e. if the threshold is set to 100 with Respect X Axis activated, the drop object’s rotation on the X axis must exactly match the X axis orientation of the drop target.

Time to Snap

  • used to set the duration of the transition from where you release the object to its drop target location.

Current Drag And Drop Target

  • the interaction of the IO on the drop target.



Audio

Activate: activates an audio clip on some condition.

    • Audio Source: the audio source used to play the audio
    • Immediate play: if enabled, the audio will stop and start again when triggered
    • Cumulate audios (only available if Immediate play is disabled): if enabled, multiple audio clips can be played at the same time from the current interaction
      • Max concurrent audios: the maximum number of audio clips that can be played at the same time from the current interaction
    • Randomize pitch: if enabled, the pitch is randomly chosen between minPitch and maxPitch each time the audio is played

Audio Section


    • Loop: looping option
      • None: looping is disabled.
      • Single: loop on a single audio clip
      • Playlist: loop on the full playlist (only useful when there are multiple audio clips specified).
        • Fade in between: if enabled, the audio will fade in between the different clips in the playlist
    • Sequence: only available when there are multiple audio clips in the playlist. Specifies the sequence in which they are played.
      • In Order: the playlist is played in the order specified in the UI
      • Random: the playlist is played randomly
    • Fade In: enables fade in. The fade duration can be specified. The fade curve can also be personalized
    • Fade Out: enables fade out at the end of the audio. The fade duration can be specified. The fade curve can also be personalized
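Under the hood, a fade with a customizable curve amounts to animating the AudioSource volume over time. The following standalone sketch uses only standard Unity APIs and is not the SDK’s actual implementation, just an illustration of the idea:

```csharp
using System.Collections;
using UnityEngine;

// Illustrative fade-in for an AudioSource with a customizable curve.
public class AudioFadeIn : MonoBehaviour
{
    public AudioSource source;
    public float fadeDuration = 2f;
    // Curve evaluated over 0..1; defaults to a linear ramp.
    public AnimationCurve fadeCurve = AnimationCurve.Linear(0f, 0f, 1f, 1f);

    public IEnumerator FadeIn()
    {
        float targetVolume = source.volume;
        source.volume = 0f;
        source.Play();
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            source.volume = targetVolume * fadeCurve.Evaluate(t / fadeDuration);
            yield return null; // wait one frame
        }
        source.volume = targetVolume;
    }
}
```

Start it from any script with StartCoroutine(GetComponent&lt;AudioFadeIn&gt;().FadeIn()).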

Deactivate: stops all audio coming from an audio source.

    • Audio Source: the audio source that needs to be stopped
    • Fade Out: enables fade out when deactivating an audio source. The fade duration can be specified. The fade curve can also be personalized


Fade