The simulator is open-source and free to use. It is aimed at, but not limited to, academic research. We welcome forking of this repository, pull requests, and any contributions in the spirit of open science and open-source code 😍😄 For enquiries about collaboration, you may contact [email protected].
If you use coupled-sim for academic work, please cite the following paper:
Bazilinskyy, P., Kooijman, L., Dodou, D., & De Winter, J. C. F. (2020). Coupled simulator for research on the interaction between pedestrians and (automated) vehicles. 19th Driving Simulation Conference (DSC). Antibes, France.
📺 These days, a video is worth more than a million words. The image below links to a YouTube video of a recorded demo of the simulator with three agents:
The coupled simulator supports both day and night-time settings. The figure above shows a view of the night mode. The figure below shows a top view of the environment. It is a model of a city centre containing:
- Network of 2-lane roads
- 10 intersections with traffic lights that can be turned on and off before the experiment or programmatically in real-time
- 34 zebra crossings
- Static objects (buildings, parked cars, trees)
- Advertisements (programmable and can be used as visual distractions)
Drivable cars:
- small (similar to Smart Fortwo)
- medium (similar to Pontiac GTO)
- large (similar to Nissan Datsun)
Cars that are not controlled by the human participants can be instructed to follow a trajectory before the experiment or can be programmed to respond to other road users.
The coupled simulator supports a keyboard and a gaming steering wheel as input sources for the driver of the manual car, a keyboard for the passenger of the AV to control the external human-machine interface, and a motion suit for the pedestrian. At the moment, the supported motion suit is the Xsens Motion Suit.
The supported sources of output are a head-mounted display (HMD) and computer screen for the driver, a computer screen for the passenger, and a head-mounted display for the pedestrian. At the moment, the supported HMD is the Oculus Rift CV1.
The current number of human participants supported by the coupled simulator is three. However, this number can be expanded up to the number of agents supported by the network. Synchronisation in a local network is handled by a custom-made network manager designed to support the exchange of information between agents with low latency and real-time data logging at 50 Hz for variables from the Unity environment and up to 700 Hz from the motion suit. The data that are logged include the three-dimensional position and rotation of the manual car and the AV, the use of blinkers by the driver of the manual car, and 150 position and angular variables from the motion suit. The data are stored in binary format, and the coupled simulator contains a function to convert the saved data into a CSV file. The host agent initiates a red bar that is displayed across all agents for 1 s to allow for visual synchronisation in case screen capture software is used.
The simulator was tested on Windows 10 and macOS Mojave. All functionality is supported on both platforms. However, support for input and output devices was tested only on Windows 10.
After checking out this project, launch Unity Hub to run the simulator with the correct version of Unity (currently 2019.3.5f1).
Select the project from the Unity Hub projects list. Wait until the project loads. If it is not in the Unity Hub list (i.e., this is the first time you are running the project), it has to be added first: click Add and select the folder containing the project files. Once the project is loaded into the Unity editor, press the Play button to run it.
To start the host, press the Start Host button. To start a client, press the Start Client button, enter the host IP address, and press Connect. Steps to run an experiment:
- Start the host and wait for clients to join, if needed.
- Once all clients have joined, on the host, select one of the experiments listed under Experiment:.
- On the host, assign roles to participants.
- On both the host and the clients, each participant has to select a control mode.
- Start an experiment with the Start Game button.
To prepare a build that instantly runs the simulation with a selected experiment and role, set the parameters of the InstantStartHostParameters struct. It can be accessed and changed via StartScene (scene) -> Managers (game object) -> NetworkingManager (component) -> InstantStartParams (field). The struct consists of the following fields (see the sketch after this list):
- SelectedExperiment: an int field indicating the zero-based index of the selected experiment in the Experiments list (field in the NetworkingManager component).
- SelectedRole: an int field indicating the zero-based index of the selected role in the Roles list (field in the ExperimentDefinition component) of the selected experiment prefab.
- SkipSelectionScreen: a boolean field; if set to true, the simulator runs (as host) the selected experiment with the selected role right after the application starts, skipping the experiment and role selection screen.
- InputMode: an enum field that selects the display and controller pair for the instant-start build. Available values are:
- Flat: use a flat screen to display the simulation and mouse & keyboard/gamepad/steering wheel to control it.
- VR: use a virtual reality headset to display the simulation and mouse & keyboard/gamepad/steering wheel to control it.
- Suite: use a virtual reality headset to display the simulation and the Xsens motion suit to control it (pedestrian avatar only).
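For reference, a minimal sketch of what this struct could look like is shown below. The field names follow this README; the exact types, enum members, and declarations are assumptions, so check the NetworkingManager sources for the real definitions.

```csharp
// Sketch only: assumed declarations mirroring the fields described above.
using System;

public enum InputMode
{
    Flat,  // flat screen + mouse & keyboard/gamepad/steering wheel
    VR,    // VR headset + mouse & keyboard/gamepad/steering wheel
    Suite  // VR headset + Xsens motion suit (pedestrian avatar only)
}

[Serializable]
public struct InstantStartHostParameters
{
    public int SelectedExperiment;    // zero-based index into the Experiments list
    public int SelectedRole;          // zero-based index into the selected experiment's Roles list
    public bool SkipSelectionScreen;  // if true, start the selected experiment as host right after launch
    public InputMode InputMode;       // display/controller pair used by the instant-start build
}
```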
The central point for configuring the simulator is the Managers game object in the StartScene scene. It has two components:
- PlayerSystem: gathers references to player avatar prefabs.
- NetworkingManager: gathers references to experiment definitions and to elements spawned during a networked experiment run (currently only waypoint-tracking cars - AICar).
An experiment is defined solely by a prefab containing the ExperimentDefinition component in its root object. To make a newly created experiment selectable, add its prefab to the Experiments list on the NetworkingManager component.
To edit the experiment definition, double click the prefab in the Project window.
The prefab will be opened in edit mode along with the currently defined Regular Prefab Editing Environment. When defining the experiment, it is worth setting the Regular Prefab Editing Environment variable to the Unity scene used in the experiment (Edit -> Project Settings -> Editor -> Prefab Editing Environments -> Regular Environment).
The ExperimentDefinition component defines the following fields:
- Name: the name of the experiment
- Scene: Unity scene name to be loaded as an experiment environment
- Roles: list defining roles that can be taken during an experiment by participants
- Points of Interest: static points that are logged in experiment logs to be used in the log processing and analysis
- Car Spawners: references to game objects spawning non-player controlled cars
- AI Pedestrians: defines a list of PedestrianDesc structs that contain a pair of an AIPedestrian (the game object that defines an AI-controlled pedestrian avatar) and WaypointCircuit (defining a path of waypoints for the linked avatar)
Points of Interest is a list of Transform references. The Car Spawners list references game objects containing a component inheriting from CarSpawnerBase, which defines the spawn sequence with an overridden IEnumerator SpawnCoroutine() method (see TestSyncedCarSpawner for a reference implementation). Car prefabs spawned by the coroutine with the AICar Spawn(CarSpawnParams parameters, bool yielding) method must be among the prefabs referenced in the AvatarPrefabDriver list on the NetworkManager component.
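As a rough illustration, a custom spawner could look like the sketch below. Only CarSpawnerBase, SpawnCoroutine(), AICar, CarSpawnParams, and Spawn(...) are taken from this README; the serialized field, access modifiers, and timing are assumptions made for the example, so refer to TestSyncedCarSpawner for the actual pattern.

```csharp
// Sketch of a car spawner under the assumptions stated above.
using System.Collections;
using UnityEngine;

public class ExampleCarSpawner : CarSpawnerBase
{
    // Assumed serialized field: spawn parameters referencing a car prefab that
    // is also present in the AvatarPrefabDriver list.
    public CarSpawnParams Parameters;

    protected override IEnumerator SpawnCoroutine()
    {
        // Spawn a non-yielding car every 10 seconds (interval chosen arbitrarily).
        while (true)
        {
            Spawn(Parameters, false);
            yield return new WaitForSeconds(10f);
        }
    }
}
```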
Base.prefab from ExperimentDefinitions folder is an example experiment definition showcasing simulator features.
The Roles field is a list of ExperimentRoleDefinition structs defining experiment roles with the following base data fields:
- Name: short name/description of the role
- SpawnPoint.Point: defines where player avatar will be spawned
- SpawnPoint.Type: the type of player avatar. It may be either Pedestrian, Driver, or Passenger of an autonomous car.
To add a new agent, either increase the size of the Roles array or duplicate an existing role by right-clicking on the role name and selecting Duplicate from the context menu.
To remove an agent, right-click on the role name and select Delete from the context menu, or decrease the list size, which removes entries at the end that no longer fit the resized list.
Add a new game object to the prefab and set its position and rotation. Drag the newly created object to the SpawnPoint.Point field in the role definition.
In addition to the location, camera settings can be provided for a spawned agent via the CameraSetup component. It should be added to the game object that represents the position where the avatar will be spawned (the one referenced in ExperimentDefinition (component) -> Roles (list field) -> role entry -> SpawnPoint (struct) -> Point (field)). The component exposes two fields (see the sketch after this list):
- FieldOfView: value applied at spawn time to the Camera.fieldOfView property.
- Rotation: value applied at spawn time to the Transform.localRotation of the game object hosting the Camera component.
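A minimal sketch of how these values can be applied at spawn time is given below; the Apply helper, the Vector3 type of Rotation, and the default values are illustrative assumptions, not the project's actual spawn code.

```csharp
// Sketch: applying the CameraSetup fields described above to the avatar's camera.
using UnityEngine;

public class CameraSetupSketch : MonoBehaviour
{
    public float FieldOfView = 60f; // copied into Camera.fieldOfView at spawn time
    public Vector3 Rotation;        // Euler angles copied into the camera's Transform.localRotation

    // Hypothetical helper called by the spawning code.
    public void Apply(Camera avatarCamera)
    {
        avatarCamera.fieldOfView = FieldOfView;
        avatarCamera.transform.localRotation = Quaternion.Euler(Rotation);
    }
}
```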
No additional configuration is needed for pedestrian type agents.
For the Driver role, the following field has to be defined:
- CarIdx - points to a car prefab on the AvatarPrefabDriver (field on PlayerSystem component) list that will be spawned for this role.
For the Passenger type of agent, the following additional fields have to be defined:
- Car Idx - indicates the car prefab that will be spawned for this role; the selected prefab is the one at the indicated index in the AvatarPrefabDriver list (field on the PlayerSystem component)
- TopHMI, WindshieldHMI, HoodHMI fields - define which HMI prefab to spawn at the indicated spots
- AutonomusPath - references the game object defining waypoints for the autonomous car via the WaypointCircuit component
Paths that can be followed both by non-playable pedestrians and vehicles are defined with the WaypointCircuit component. To add a waypoint, press the plus sign and drag the waypoint Transform into the newly added field. To remove a waypoint, press the minus sign next to it. To reorder waypoints, click the up/down signs next to a waypoint. To change the position of a waypoint, select its Transform and move it to the desired position.
Additionally, for vehicles, the SpeedSettings component along with a Collider component can be used to further configure the tracked path.
A WaypointCircuit can be serialized into CSV format (semicolon-separated) with the Export to file button. The following parameters are serialized:
Game object
- name
- tag
- layer
Transform
- x; y; z - world position
- rotX; rotY; rotZ - world rotation (Euler angles)
SpeedSettings
- waypointType
- speed
- acceleration
- jerk
- causeToYield
- lookAtPlayerWhileYielding
- lookAtPlayerAfterYielding
- yieldTime
- brakingAcceleration
- lookAtPedFromSeconds
- lookAtPedToSeconds
BoxCollider
- collider_enabled - component enable state
- isTrigger
- centerX; centerY; centerZ - box collider center
- sizeX; sizeY; sizeZ - box collider size
The CSV file can be modified in any external editor and then imported with the Import from file button. Importing a file removes all current waypoint objects and replaces them with newly created ones according to the data in the imported CSV file.
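To give an idea of the file layout, a hypothetical two-line export is shown below. The header simply repeats the parameter names listed above and every value is made up; inspect a file produced by Export to file for the exact column names and order.

```
name;tag;layer;x;y;z;rotX;rotY;rotZ;waypointType;speed;acceleration;jerk;causeToYield;lookAtPlayerWhileYielding;lookAtPlayerAfterYielding;yieldTime;brakingAcceleration;lookAtPedFromSeconds;lookAtPedToSeconds;collider_enabled;isTrigger;centerX;centerY;centerZ;sizeX;sizeY;sizeZ
Waypoint001;Untagged;0;12.5;0;48.0;0;90;0;0;30;2;1;False;False;False;0;-3;0;0;True;True;0;0;0;1;1;1
```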
The initial eye contact tracking state and base tracking parameters are defined with fields in the EyeContact component. EyeContactTracking defines the driver's initial (and, at runtime, current) eye contact behavior while the car is not fully stopped.
- MinTrackingDistance and MaxTrackingDistance define (in meters) the range of distances at which eye contact tracking is possible. Distance is measured between the driver's head position and the pedestrian's root position (ignoring the distance on the vertical axis).
- MaxHeadRotation (in degrees) limits head movement on the vertical axis. If tracking is enabled, EyeContact selects as the target the closest game object tagged "Pedestrian" that is within the distance range, provided it meets the rotation constraint (this constraint is checked once the closest object has been selected).
EyeContactRigControl is a component that consumes the tracking target provided by the EyeContact component and animates the driver's head movement.
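The selection logic described above can be summarised with the following sketch. It is an assumed helper written for illustration, not the project's EyeContact code: it picks the closest "Pedestrian"-tagged object within the distance range, measuring distance horizontally, and only then tests the rotation constraint.

```csharp
// Sketch of the target selection described above (assumed helper).
using UnityEngine;

public static class EyeContactTargetSelectionSketch
{
    public static Transform SelectTarget(Transform driverHead,
                                         float minTrackingDistance,
                                         float maxTrackingDistance,
                                         float maxHeadRotation)
    {
        Transform closest = null;
        float closestDistance = float.MaxValue;

        foreach (GameObject pedestrian in GameObject.FindGameObjectsWithTag("Pedestrian"))
        {
            // Ignore the vertical axis when measuring distance.
            Vector3 offset = pedestrian.transform.position - driverHead.position;
            offset.y = 0f;
            float distance = offset.magnitude;

            if (distance < minTrackingDistance || distance > maxTrackingDistance)
                continue;

            if (distance < closestDistance)
            {
                closestDistance = distance;
                closest = pedestrian.transform;
            }
        }

        if (closest == null)
            return null;

        // The rotation constraint is checked only for the already selected
        // closest pedestrian, as described above.
        Vector3 toTarget = closest.position - driverHead.position;
        toTarget.y = 0f;
        Vector3 forward = driverHead.forward;
        forward.y = 0f;
        float angle = Vector3.Angle(forward, toTarget);

        return angle <= maxHeadRotation ? closest : null;
    }
}
```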
The eye contact tracking state can be changed when the car reaches a waypoint. The behavior change is defined by SpeedSettings, the component embedded on waypoint objects. The following four fields control those changes:
- EyeContactWhileYielding: defines how the driver will behave while the car is fully stopped
- EyeContactAfterYielding: defines how the driver will behave when the car resumes driving after a full stop. This value simply overwrites the current value of EyeContact.EyeContactTracking if the car has fully stopped.
- YieldingEyeContactSince: defines how many seconds need to pass before the driver will make eye contact (starting from the moment the car has fully stopped)
- YieldingEyeContactUntil: defines how many seconds need to pass before the driver ceases to maintain eye contact (starting from the moment the car has fully stopped)
DayNightControl component helps to define different experiment daylight conditions. It gathers lighting-related objects and allows defining two lighting presets - Day and Night - that can be quickly switched for a scene. This component is intended to be used at the experiment definition setup stage. When the development of the environment is complete, it is advised to save the environment into two separate scenes (with different light setups) and bake the lightmaps.
Creating a street traffic lights system is best started by creating an instance of ExampleStreetLightCrossSection and adjusting it. The traffic light sequence is defined in the StreetLightManager component as a list of StreetLightEvents. Events are processed sequentially. Each event is defined by the following fields (see the sketch after this list):
- Name: descriptive name of an event
- Delta Time: relative time that has to pass since previous event to activate the event
- CarSections: cars traffic light group that the event applies to
- PedestrianSections: pedestrian traffic light group that the event applies to
- State: the state to be set on the lights specified by the sections; LOOP_BACK is a special state that restarts the whole sequence
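For orientation, the sketch below mirrors the event fields listed above; the enum and array types are assumptions, so check the StreetLightManager sources for the real declarations.

```csharp
// Sketch only: assumed types around the fields described in this README.
using System;
using UnityEngine;

public enum LightState { Red, Yellow, Green, LOOP_BACK } // LOOP_BACK restarts the whole sequence

[Serializable]
public class StreetLightEvent
{
    public string Name;                      // descriptive name of the event
    public float DeltaTime;                  // relative time since the previous event, in seconds
    public GameObject[] CarSections;         // car traffic light group the event applies to
    public GameObject[] PedestrianSections;  // pedestrian traffic light group the event applies to
    public LightState State;                 // state set on the lights in those sections
}
```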
The Speedometer component controls a speed indicator. To use a digital display, set the Speedometer Text field. To use an analog display, set the following fields (a sketch of the needle mapping follows the list):
- Pivot - the pivot of the arrow
- PivotMinSpeedAngle - the inclination of the arrow at zero speed
- PivotMaxSpeedAngle - the inclination of the arrow at max speed
- MaxSpeed - the max speed displayed on the analog display
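The sketch below shows how the analog needle angle can be derived from these fields; it is an assumed implementation for illustration, not the project's Speedometer code, and the default values are made up.

```csharp
// Sketch: mapping the current speed to a needle angle between the two extremes.
using UnityEngine;

public class AnalogNeedleSketch : MonoBehaviour
{
    public Transform Pivot;                  // pivot of the arrow
    public float PivotMinSpeedAngle = 0f;    // inclination at zero speed
    public float PivotMaxSpeedAngle = -240f; // inclination at MaxSpeed (illustrative value)
    public float MaxSpeed = 200f;            // max speed shown on the dial

    public void SetSpeed(float speed)
    {
        // Linearly interpolate the needle angle between the two extremes.
        float t = Mathf.Clamp01(speed / MaxSpeed);
        float angle = Mathf.Lerp(PivotMinSpeedAngle, PivotMaxSpeedAngle, t);
        Pivot.localRotation = Quaternion.Euler(0f, 0f, angle);
    }
}
```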
Connect the Asus router with the USB adapter to the PC and plug the Ethernet cable into a yellow port of the router. Let it boot up for some time while you prepare the software and the suit. Connect the Oculus Rift HDMI and USB 3.0 to the computer. Plug the black MVN dongle into a USB 3.0 port of the computer. Run the latest version of MVN Analyze. Fill out the anthropometric data and participant name by creating a new recording (1).
Check under Options -> Preferences -> Miscellaneous -> Network Streamer that the data streamer is set to Unity3D.
Note: if you find out later on that the avatar looks buggy in Unity, play around with the downsampling skip factor in the Network Streamer overview to improve rendering.
Turn on the Body Pack with all the sensors connected; this will sound like one beep. Wait for 30 seconds and press the WPS button on the router. When the Body Pack of the suit beeps again, press and hold its button for 2 seconds until you hear two sequential beeps. Check whether the light of the button becomes a strobe light. If this does not happen within about five minutes, the problem most likely lies with Windows.
Delete the following software from the PC and re-install the latest version of MVN Analyze. This will also re-install the Bonjour print services.
If you have an avatar in MVN Analyze and all the sensors are working, boot Unity for the simulation. See figure below: press play to launch the simulation, use the dropdown menu to select a participant and press the trial button to launch a trial.
Note: if you want to match the orientation of the Oculus to the orientation of the avatar's head, make sure you have left-clicked the game screen in Unity and press the R key on your keyboard. Pressing the R key to match the visuals with the head orientation is an iterative process that requires feedback from the participants.
If you want to start a new trial, click Play again at the top of the Unity screen to end the simulation. Also match the head orientations again with the R-key loop.
Always run the Oculus Home software when using the Oculus Rift; otherwise you will encounter black screens. Make sure your graphics driver and USB 3.0 connections are up to date. If Oculus gives a critical hardware error, disconnect the Rift, set the Oculus Home software to Beta (Public Test Channel, first figure below), and check that Oculus Home is set to allow Unknown apps (second figure below).
The agent PCs need to be connected via a local network. If you cannot reach the host machine, try to ping it.
Inbound rules: allow all connections for the correct Unity Editor version.
Check that supporting software is installed (e.g., Logitech Gaming Software for the G27 wheel in our case). In Unity, you can check which button corresponds to which control on your specific wheel by using the following asset in an empty project: https://assetstore.unity.com/packages/tools/input-management/controller-tester-43621
Then make sure you assign the correct inputs in Unity under Edit -> Project Settings -> Input (see figure below).
We have used the following free assets:
Name | Developer | License
---|---|---
ACP | Saarg | MIT
Textures | Various | Creative Commons CC0
Oculus Integration | Oculus | Oculus SDK License
Simple modular street Kit | Jacek Jankowski | Unity asset
Realistic Tree 10 | Rakshi Games | Unity asset
Small Town America - Streets | MultiFlagStudios | Unity asset
Cars Free - Muscle Car Pack | Super Icon LTD | Deprecated, Unity asset
Mini Cargo Truck | Marcobian Games | Unity asset
Street Bench | Rakshi Games | Unity asset
waste bin | Lowpoly_Master | Unity asset
Smart fortwo | Filippo Citati | MIT