User Manuals

The SpatialStories SDK is currently available in two versions: one for Virtual Reality (VR) and one for Augmented Reality (AR). Each version has its own User Manual.


Getting Started

Download SpatialStories

When your subscription has been approved, log in and go to this page to download the SDK. Unzip it to access a file with the standard .unitypackage extension.

Download Unity

Make sure that you have the recommended version of Unity installed for the SDK. SpatialStories has been successfully tested with Unity 5.6.2f1, which can be downloaded here (select the ‘Unity 5.x’ tab). Unity 2017 is also fully supported.

Import the SDK in Unity

Open Unity and create a new project (or open an existing one). Go to Assets -> Import Package -> Custom Package…, pick the .unitypackage file you’ve just downloaded and click on Import. This will take a few minutes. Once it has been imported into your project, you can delete the downloaded .unitypackage file from your hard drive.

Input Manager

To access the inputs of the different manufacturers’ controllers (HTC Vive, Oculus Touch), you need to configure the InputManager. There are two ways to do it:

  1. Right click anywhere in the Project Tab and select Repair Input Manager.
  2. Once you have converted the camera into an Interactive Camera (IO), the system will check when you hit Play whether you have the necessary configuration to use the input system. If the proper inputs are not set up, you will be asked to configure them; simply follow the instructions.

Note that your original InputManager file will be kept as a backup in the same location!



The Camera

SpatialStories uses its own camera with the following functionalities:

  • Hands Tracking: allows the use of your hands.

  • Teleportation: allows you to teleport in the virtual world.

  • Player's Torso: allows you to differentiate your head from your torso.

To benefit from the SpatialStories Camera, all you have to do is convert the standard Unity Camera into an Interactive Camera. Right-click on the Camera in the Hierarchy Tab and select SpatialStories -> Convert into Interactive Camera.

Once converted, your new camera will have the following structure:

The Camera (IO) gives you access to the Input Manager.

Input Manager

Left/Right Hand Active: defines which hand(s) to use in your experience. This is useful when the platform has only one controller, like the Gear VR controller or Daydream; in those cases you can leave one hand active, or even deactivate both if the platform has no controllers.

Track Position/Orientation: defines whether you want to track your hands’ position and/or orientation. It may be useful to deactivate these options to simulate a controller with no positional tracking abilities, like the Gear VR and Daydream controllers.

Haptic Audio Clip Min/Max: defines which audio clip to use for haptic feedback on controllers that support this functionality.

On the Head (IO) GameObject, you have the standard Interactive Object structure with the addition of the Player Torso object. The Torso represents the approximate position of the torso below the head.

You can change the distance between the head and the torso in the inspector.

Left Hand (IO): represents the left hand of the player. Here you have access to the Teleportation and Levitation options, where you can tweak the settings for the left hand.
Right Hand (IO): represents the right hand of the player. Here you have access to the Teleportation and Levitation options, where you can tweak the settings for the right hand.


Teleport

To move around in the virtual environment, the user has the ability to teleport. This is especially useful if your experience contains large spaces or if you are using a device with no positional tracking.
By default, the Teleport button is the joystick of the Oculus Touch controller. First, move the joystick in a direction to make the Teleport Beam show up. When teleporting is possible, the beam is blue and a circular target appears at its end. Simply release the joystick to teleport to that target.

To set up your teleport zones (and avoid being able to teleport onto every collider), you can create a Teleport layer and select it in the Teleporter Settings (see the Details section below). Then set your teleport game objects to be on the Teleport layer.
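If you prefer to assign the layer from code (for instance when spawning teleport floors at runtime), the following minimal sketch uses the standard Unity API only; it assumes you have already created a layer named ‘Teleport’ in the Tags and Layers settings:

    using UnityEngine;

    // Minimal sketch (plain Unity API, not SpatialStories-specific): puts this
    // GameObject on the "Teleport" layer at runtime. Assumes a layer with that
    // exact name exists in Edit > Project Settings > Tags and Layers.
    public class TeleportZoneSetup : MonoBehaviour
    {
        void Awake()
        {
            int teleportLayer = LayerMask.NameToLayer("Teleport");
            if (teleportLayer == -1)
            {
                // NameToLayer returns -1 when the layer does not exist.
                Debug.LogWarning("No 'Teleport' layer found; create it in Tags and Layers.");
                return;
            }
            gameObject.layer = teleportLayer;
        }
    }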

The Teleport Settings are located in the Interactive Camera (Camera IO) rig, more precisely in Left Hand (IO) and Right Hand (IO). If you want, you can choose a specific set of Teleport parameters for each hand and change its design.

When the Teleport Beam is blue, you can teleport to the targeted destination. The color can be customized with the ‘Good Spot Col’ property.

When the Teleport Beam is red, teleporting is not allowed. The color can be customized with the ‘Bad Spot Col’ property.

  • Gyro Prefab: mandatory! The tip of the arc marking the target destination (circle)
  • Orient On Teleport: allows the user to choose an orientation on teleport by rotating the joystick. The rotating cone shape inside the circle represents the direction the user will face once teleported.
  • Input Threshold: below this value, the input won’t be taken into account (avoids false inputs)
  • Teleport Layer: the layers the teleport is allowed on
  • Transition: [not used, will be removed]
  • Fade Mat: [not used, will be removed]
  • Fade Duration: [not used, will be removed]
  • Gravity: changes the shape of the arc
  • Initial Vel Magnitude: [not used, will be removed]
  • Time Step: [not used, will be removed]
  • Max Distance: limits the teleport distance (in meters)
  • Dash Speed: [not used, will be removed]
  • Teleport Cooldown: cooldown between two teleport actions
  • Mat Scale: scales the material on the arc
  • Tex Movement Speed: animates the arc texture
  • Good Spot Col: defines the color when teleport is allowed
  • Bad Spot Col: defines the color when teleport is NOT allowed
  • Gyro Color: defines the color of the teleport gyro (at the tip of the arc)
  • Gyro UI: defines the look of the tip of the arc
  • Arc Line Width: sets the width of the arc
  • Arc Material: defines the material of the arc


Create your first Interactive Object (IO)

SpatialStories uses what we call ‘Interactive Objects’, shortened hereafter to IO(s).

IOs are any objects you want to add life and interactions to. For instance, you can create a simple cube from within Unity and convert it into an IO, or you can import a model into Unity from your favorite 3D software and convert it into an IO. You can also use 2D assets or anything you want. As soon as an object has been converted into an IO, you will be able to grab it, for example.

Converting an object into an IO

To convert your object into an IO, there are two options:

    • Right-click on your object in the Hierarchy view and choose: SpatialStories -> Convert into an interactive object
    • Select your object in the Hierarchy or in the Scene view and, in the menu bar, select SpatialStories -> Convert -> Into an interactive object.

WARNING: Interactive Objects come without a physics collider, so you can choose whether you want to use physics or not. You must add the collider by hand to the visuals of your object. If you don’t want the object to react to gravity, you can uncheck the gravity toggle of the Rigidbody.
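If you prefer to do this from a script rather than in the Inspector, here is a minimal sketch using standard Unity components only; the ‘Visuals’ child name matches the IO structure described below, and everything else is plain Unity API:

    using UnityEngine;

    // Minimal sketch (plain Unity API): adds a physics collider to the IO's
    // Visuals node and unchecks gravity on its Rigidbody, mirroring the manual
    // steps described above.
    public class IOPhysicsSetup : MonoBehaviour
    {
        void Awake()
        {
            // Add a collider to the visuals so the object can react to physics.
            Transform visuals = transform.Find("Visuals");
            if (visuals != null && visuals.GetComponent<Collider>() == null)
                visuals.gameObject.AddComponent<BoxCollider>();

            // Equivalent to unchecking the 'Use Gravity' toggle on the Rigidbody.
            Rigidbody body = GetComponent<Rigidbody>();
            if (body != null)
                body.useGravity = false;
        }
    }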

Define Interactive Object Manipulation Mode

Each Interactive Object has a set of manipulation options, which you will find at the root of each IO:


None: the object is static and can’t be manipulated.
Grab: the object can be grabbed by the user. You can define here the Grab Distance, the Attraction Speed, and whether the object will snap when grabbed. If the Snap on Grab option is checked, the object uses the right and left Snap Handles to position itself.
Levitate: the object uses what we call levitation to be moved around. This is particularly useful for mobile VR experiences, where there is less freedom of movement. You can define here the levitation distance.
Touch: the ability for an object to be triggered at a distance by pointing at it and pressing the trigger.



Interactive Object (Structure)

  • Lists the interactions defined by the user.
  • You can add as many interactions as needed for each Interactive Object.
  • Each interaction is made of Conditions and Actions.
  • The actions will be executed once the interaction’s conditions are satisfied.
  • If no conditions are specified for an interaction, it is considered satisfied by default, thus executing its actions immediately.
  • List of user-defined visual assets defining the visual aspect of this IO.
  • Visuals can be anything: 2D, 3D…
  • You can also leave this empty (keep the Visuals node and remove anything inside it) to use this IO as a trigger zone, like a checkpoint or an invisible button.
  • Proximity defines the zone around the IO where collisions can occur.
  • Proximity is used to detect when an object is in proximity with another object (including the player).
  • Each IO has a proximity.
  • It’s up to the user to define its shape and size. By default, we provide a simple box collider but you can replace it with any collider type as long as ‘Is Trigger’ is checked.

  • Reacts to the gaze of the user (camera forward direction).
  • It’s up to the user to define its shape and size. By default, we provide a simple cube but you can replace it with any shape as long as ‘Is Trigger’ is checked.


  • This zone defines where the IO will react to hand hovering.
  • It’s up to the user to define its shape and size. By default, we provide a simple box collider but you can replace it with any shape as long as ‘Is Trigger’ is checked.


  • This zone defines where the IO can be taken with the Grab ability.
  • It’s up to the user to define its shape and size. By default, we provide a simple box collider but you can replace it with any shape as long as ‘Is Trigger’ is checked. In general this zone matches the shape of the object.


  • This transform object defines the position and rotation the IO will move to when grabbed by the left hand.
  • It’s up to the user to define its position and rotation.


  • This transform object defines the position and rotation the IO will move to when grabbed by the right hand.
  • It’s up to the user to define its position and rotation.


  • This is where you add your audio sources that will be used to play your audio clips.
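All of the zones above (Proximity, Gaze, Hover and Grab) rely on Unity trigger colliders, which is why ‘Is Trigger’ must stay checked. The SDK performs the detection for you; purely as an illustration, a trigger zone behaves like this plain-Unity sketch:

    using UnityEngine;

    // Conceptual illustration only (plain Unity, not SDK code): a collider with
    // 'Is Trigger' checked reports enter/exit events instead of physically
    // blocking other objects.
    public class TriggerZoneExample : MonoBehaviour
    {
        void OnTriggerEnter(Collider other)
        {
            Debug.Log(other.name + " entered the zone around " + name);
        }

        void OnTriggerExit(Collider other)
        {
            Debug.Log(other.name + " left the zone around " + name);
        }
    }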


Add Interactions to your IO(s)

An Interaction is defined by two elements:

  1. A set of conditions
  2. A set of actions

When all conditions are met, the actions are triggered.

All interactions are defined in the Interactions object of the structure of an IO. Let’s see how it works!

In the Hierarchy view, if you select the default interaction, you will see an interaction window in the Inspector. Here you have two checkboxes: one for setting up Conditions and one for setting up Actions. The principle is simple: for any interaction, once its conditions are validated, the associated actions will take place (if you define only actions and no conditions, they will occur instantly).



Condition Parameters

The following list gives you an overview of all the conditions available for you to trigger an action. You can define one or several.

Gaze: valid if the specified IO is:

  • gazed at by the camera (player) if set to IN
  • not gazed at by the camera (player) if set to OUT

Point: for this option, three fields need to be specified:

  1. The hand you want to take into account (LEFT, RIGHT, BOTH)
  2. The IO to point at with the controller
  3. The action (IN for pointing at the object, OUT for pointing away from it)

For instance: the LEFT hand should point at the Alien (IO).

Proximity:

  • Valid if the specified list of objects are in proximity (meaning that the Proximity colliders of the objects are in collision with each other).
  • OnEnter / OnExit lets you define whether you want the condition to validate when the specified IOs are entering or exiting their proximity zones.
  • If ‘Require All’ is checked, all the specified objects in this list have to be in proximity with each other. If not checked, as soon as two of them are in proximity, this condition will be validated.
Touch: for this option, you have three fields to specify:

  1. The hand you want to take into account (LEFT, RIGHT, BOTH)
  2. The action (TOUCH, UNTOUCH)
  3. The IO

For instance, you can say that the LEFT hand should TOUCH the Button (IO).

Or BOTH hands should UNTOUCH the Door (IO).

Grab: for this option, you have three fields to specify:

  1. The hand you want to take into account (LEFT, RIGHT, BOTH)
  2. The action (GRAB, UNGRAB)
  3. The IO

For instance, you can say that the LEFT hand should GRAB the Key.

Or BOTH hands should UNGRAB the Lever.

Input:

  • Lets you define which button/axis will validate the conditions.
  • If ‘Require All’ is checked, all the specified inputs in this list have to be pressed. Otherwise, as soon as one of them is used, this condition will be validated.
Teleport:

  • The condition will be validated once a teleport has occurred.
  • You can decide on which teleport phase the condition will be validated.
  • There are 5 distinct phases when the user uses the teleport system:
    • TELEPORT: when the user has been teleported
    • ACTIVATED: when the user starts the teleport process (is choosing where to teleport)
    • DEACTIVATED: when the user stops the teleport process (releases the teleport button without teleporting)
    • BAD_DESTINATION: when the user tries to teleport to a bad destination
    • GOOD_DESTINATION: when the user tries to teleport to a good destination
Dependency: allows you to define whether this interaction is dependent on any other in the scene. For details, click here.
Custom Condition: if you check Custom Condition, you will have to add your own script component on this interaction.
Your C# script must inherit from Gaze_AbstractConditions.
This is useful when none of the given conditions meet your needs and you want to create your own custom condition.
In your script, where you define the conditions, you’ll have to call the ValidateCustomCondition(true) method in order to validate your custom condition.
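For instance, here is a minimal sketch of a custom condition that validates after a fixed amount of time. Gaze_AbstractConditions and ValidateCustomCondition(true) are the SDK hooks described above; the class name and the timer logic are just illustrative:

    using UnityEngine;

    // Minimal custom condition sketch: validates once a timer has elapsed.
    // Gaze_AbstractConditions and ValidateCustomCondition(true) come from the
    // SDK as described above; the timer itself is only an example.
    public class TimerCondition : Gaze_AbstractConditions
    {
        [SerializeField]
        private float delayInSeconds = 10f;

        private float elapsed;
        private bool validated;

        void Update()
        {
            if (validated)
                return;

            elapsed += Time.deltaTime;
            if (elapsed >= delayInSeconds)
            {
                validated = true;
                // Tells the interaction this custom condition is now satisfied.
                ValidateCustomCondition(true);
            }
        }
    }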
An Interactive Object has several lifecycle phases, also called timeframes, described below:

  • Active: when an IO finishes its pending phase, based on its delay and its dependencies
  • Triggered: when an IO is triggered
  • After: when an IO reaches its expiration time, if specified

Conditions are checked, and an interaction can occur, only during the ACTIVE phase.

Delay: represents the amount of time in seconds this interaction will have to wait before allowing the user to validate the conditions.

For instance, setting a value of 10 means the user will have to wait 10 seconds before they can try to satisfy the specified conditions, like Gaze.

Expires: represents the amount of time in seconds during which this interaction can be triggered once it’s in the active phase (after its delay and dependency validation).

For instance, setting a value of 10 means the conditions of this interaction can be validated during 10 seconds. After that, the conditions can’t be satisfied anymore.

Auto Trigger is useful when you want an action to occur without taking conditions into account.

  • If set to NONE, it is deactivated: all the specified conditions have to be met.
  • If set to START, no matter which other conditions are specified, this interaction will be triggered immediately.
  • If set to END, when the Expires time is reached, the conditions will be automatically validated even if the other conditions are not met (useful to avoid deadlocks in the continuation of an experience).
Reload: if you want your interaction to be triggered more than once, you can use this option to reload the interaction.

  • Modes :
    • MANUAL ONLY: the reload has to be called through your own C# script
    • INFINITE: this interaction can be reloaded an infinite number of times during its active phase
    • FINITE: this interaction can be reloaded a limited number of times during its active phase

Example: if you have a button that opens a door, you want that interaction to happen every time the user presses the button. In that case, you create an interaction called ButtonOpensDoor and check Reload with mode INFINITE.



Creating Chain Reactions using Dependencies

Lets you define a dependency that will validate the condition. This feature of our Unity plugin is extremely useful: it allows you to build chains of events for your user. Dependencies let you define which interaction will be triggered based on what the user is doing. Check out the simple dependency demo scene to get a quick idea of how this can be used.

Whether it is to build an interactive story, game mechanics, tutorials or simulations, dependencies are a practical and simple feature. To see them in action, you can also download our latest game, Break a Leg, on the Gear VR, which uses dependencies to drive the whole chain of events of the adventure game.

Each Interactive Object has three distinct phases (Active, Triggered and After), as described in the Condition Parameters section above.

With this option, you can set up a condition to be validated based upon the validation of another Interactive Object.

If ‘Require All’ is checked and the list contains more than one interaction, all of them must be validated before the action is triggered.



Actions

Activates or deactivates the visuals of this IO.
Activates or deactivates the colliders associated with this IO.
Modifies the grab ability; used to add or remove the grab ability of an object.
Modifies the grab distance at which the user can grab this IO.
Modifies the grab mode; choose between ATTRACT and LEVITATE modes.
Used to add the touch ability to this IO.
Used to modify the maximum distance at which this IO can be touched.
Used to modify the gravity on an IO.
Used to play an audio clip.
Used to play an animation.
Used to destroy this IO.



Animations (NEW)

There are three types of animations that can be triggered through an action:

1/ Using Mecanim: you will automatically see the list of triggers that have been set up in the Animator window of the Interactive Object. The Animator can either be at the root of the Interactive Object or directly on the visuals if it comes from a 3D model made in Maya.

Warning: sometimes the Unity editor does not update the trigger list properly; just press Play once for the triggers to appear in your IO action window.
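The triggers in that list are ordinary Animator triggers, so firing one from your own script is equivalent to what the animation action does. A minimal plain-Unity sketch (the trigger name ‘Open’ is a placeholder for whatever you defined in your Animator Controller):

    using UnityEngine;

    // Plain-Unity sketch: fires a Mecanim trigger on this IO's Animator. The
    // Animator may sit at the root of the IO or on its visuals, hence
    // GetComponentInChildren. "Open" is a placeholder trigger name.
    public class FireAnimationTrigger : MonoBehaviour
    {
        private Animator animator;

        void Awake()
        {
            animator = GetComponentInChildren<Animator>();
        }

        public void OpenDoor()
        {
            animator.SetTrigger("Open");
        }
    }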

2/ Using Animation Clips: when you don’t need to set up Mecanim to trigger an animation, you can simply drop in the animation clip to play it. (THIS FEATURE IS UNDER CONSTRUCTION. IT HAS BUGS THAT NEED TO BE RESOLVED.)



Drag and Drop (NEW)

Note: with this second release, you have the ability to use a simplified Drag and Drop functionality.

Drag and Drop is the ability to set a target place for an object to go into (in this example, the hat is dragged and dropped on the character’s head). This feature is extremely useful for defining snapping zones in gaming, educational or healthcare experiences, when you want your user to place an element at a specific spot.

Set up Drag and Drop

Follow the steps below to set up a Drag and Drop interaction:

  1. Create an IO for your drop object (see the Create your first Interactive Object (IO) section to learn how);
  2. Click on Enable Drag And Drop at the root of the drop object;
  3. Generate the drop target and place it where you want to drop the object (see explanations below);
  4. Configure the drop object (see explanations below).
  • The fastest way to create a drop target is to generate it from the Gaze_InteractiveObject component.
  • Once “Enable Drag And Drop” is set to true, you will be able to see a field called “Generate Targets”.
  • Just enter the number of targets you want to generate and hit the “GO” button.
  • Your target is automatically generated and fully set up; you just have to place it where you want to drop the object.
  • Make sure your drop object IO is manipulable so that it can be dragged (the manipulation mode has to be set to GRAB or LEVITATE).
  • At the top level of an Interactive Object, you have a Gaze_InteractiveObject component. Here, if “Enable Drag And Drop” is set to true, you can define the parameters that allow a drop, as follows:
    • Minimum Distance
      the distance between the drop object and the drop target. If the object is not within this distance, the drop is not allowed.
    • Respect X/Y/Z Axis (and mirrored)
      if you choose to constrain the drop to the target’s rotation, here you can specify which axes are mandatory in order to allow the drop. E.g., if you check ‘Respect X Axis’, the drop object must have the same orientation on its X axis as the drop target (within the specified threshold).
    • Angle Threshold
      represents, as a percentage, how precisely the orientation of the drop object must match the drop target. E.g., if the threshold is set to 100 with Respect X Axis activated, the drop object’s rotation on the X axis must be exactly the same as the X axis orientation of the drop target.
    • Time to Snap
      sets the duration of the transition from where you release the object to its drop target location.
  • Two things need to be specified:
    1. The moment when the condition is validated (when the drop object is dropped on the target, when it is removed from the target…)
    2. The target(s) you want to take into account to validate this condition (it can be any of its targets, or you can specify a particular set of targets)
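To make the Minimum Distance and Respect Axis parameters concrete, here is an illustrative sketch of the kind of check they describe. This is not the SDK's actual code, and the mapping from the threshold percentage to an angle tolerance is an assumption made for the example:

    using UnityEngine;

    // Illustrative sketch only (not the SDK's implementation) of a drop check
    // combining 'Minimum Distance' and 'Respect X Axis' with an angle threshold.
    public static class DropCheckSketch
    {
        public static bool IsDropAllowed(Transform dropObject, Transform dropTarget,
                                         float minimumDistance, bool respectXAxis,
                                         float angleThresholdPercent)
        {
            // The object must be close enough to the target.
            if (Vector3.Distance(dropObject.position, dropTarget.position) > minimumDistance)
                return false;

            if (respectXAxis)
            {
                // 100% = the X axes must match exactly; lower values tolerate a
                // proportionally larger deviation (assumed interpretation).
                float toleranceDegrees = (100f - angleThresholdPercent) * 1.8f;
                float deviation = Vector3.Angle(dropObject.right, dropTarget.right);
                if (deviation > toleranceDegrees)
                    return false;
            }
            return true;
        }
    }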


Audio

Activate: activates an audio clip on some condition.

    • Audio Source: the audio source used to play the audio
    • Immediate play: if enabled, the audio will stop and start again when triggered
    • Cumulate audios (only available if Immediate play is disabled): if enabled, multiple audio clips can be played at the same time from the current interaction
      • Max concurrent audios: the maximum number of audio clips that can be played at the same time from the current interaction
    • Randomize pitch: if enabled, the pitch is randomly chosen between minPitch and maxPitch each time the audio is played (see the sketch below)
    • Loop: looping option
      • None: looping is disabled
      • Single: loop on a single audio clip
      • Playlist: loop on the full playlist (only useful when there are multiple audio clips specified)
        • Fade in between: if enabled, the audio will fade between the different clips in the playlist
    • Sequence: only available when there are multiple audio clips in the playlist; specifies the sequence in which they are played
      • In Order: the playlist is played in the order specified in the UI
      • Random: the playlist is played randomly
    • Fade In: enables fade in. The fade duration can be specified, and the fade curve can be personalized
    • Fade Out: enables fade out at the end of the audio. The fade duration can be specified, and the fade curve can be personalized
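As an aside, the Randomize pitch option corresponds to the standard Unity pattern of picking a pitch before each play. A minimal plain-Unity sketch of that idea (not the SDK's code):

    using UnityEngine;

    // Plain-Unity sketch of what 'Randomize pitch' means: choose a pitch between
    // minPitch and maxPitch each time a clip is played.
    public class RandomPitchPlayer : MonoBehaviour
    {
        public AudioSource source;
        public float minPitch = 0.9f;
        public float maxPitch = 1.1f;

        public void Play(AudioClip clip)
        {
            source.pitch = Random.Range(minPitch, maxPitch);
            source.PlayOneShot(clip);
        }
    }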

Deactivate: stops all audio coming from an audio source.

    • Audio Source: the audio source that needs to be stopped
    • Fade Out: enables fade out when deactivating an audio source. The fade duration can be specified, and the fade curve can be personalized