Feature/kimera integration (#8)

* Update setup.py

* Feature/set sim params (#28)

* Updated version number to 0.1.1+snapshot.

* Fix/rename (#19)

* update requirements and readme

* update installation instructions

* Update README.md

* rename baseline notebooks

* update baseline notebook; rename  ->

* Fix/update requirements (#22)

* Feature/specify mode (#23)

* add options for evaluation

* reformatting

* Update README.md

* Update README.md (#24)

* Feature/refactor tasks (#25)

* set sim parameters upon startup and add error checking

* clear notebook

* remove navigation notebook

* Fix typo in docs

* Fix `%matplotlib notebook` line

* Feature/decode goseek observations (#29)

* add function to decode observations into imgs + pose

* update notebook

* Feature/update docs (#30)

* update notebook docs

* updated docs

* Update goseek_full_perception.py

* Feature/update typehints (#32)

* update typehints

* update docstring

* option to change scene and random seed on reset; add custom message to signal episode reset

* moved scene change and random seed setting to gym env

* add method to get current time

* current state;  is WIP

* track number of steps in control loop

* minor refactor

* refactor

* bump version

* Update setup.py

* 0.1.4 snapshot (#39)

* Feature/update baseline defaults (#38)

* change default settings in baseline notebook and gym env

* Updated version number to 0.1.0, added setup.cfg, updated distribution statement, added unit test, added license file.

- Updated version number in setup.py to 0.1.0.
- Added setup.cfg with tox settings.
- Updated distribution statement across all source and readme files.
- Added simple unit test to verify module import.
- Added license file.

* 0.1.1 snapshot (#26)

* Remove +snapshot from version

* 0.1.2 snapshot (#36)

* Feature/update tesse install (#34)

* Update setup.py

* Update README.md

* Delete requirements.txt

* Update setup.py

* Fix/remove segmentation from reward (#35)

* remove segmentation requirement from reward; refactor and add docs

* remove extra line

* update docstring

* minor comment update

* Update README.md

* Update README.md

* Update requirements.txt

* Change required tesse-interface version

* Update required tesse version to 0.1.0

* remove objects during reset

* added install_requires to setup.py

* remove requirements.txt

* Update README.md

* add required python version

* Update README.md

* Update README.md

* Update README.md

* remove `+snapshot`

* Update setup.py

* Feature/collision check (#37)

* add collision check; refactor

* refactoring

* refactor pd gains

* minor typo fix

* Bump version

* Update setup.py

* bump version

* add method to get UDP broadcast metadata

* use ground truth metadata for computing reward in noisy mode

* add temporary error check on metadata time

* ensure observation after reset is synced in noisy mode

* add nonetype check

* add nonetype check to goseek full perception

* add request wrapper to re-request from TESSE if a response is dropped

* initialize controller state

* removing print

* Update setup.py

* use DataRequest to initialize pose (#49)

* use DataRequest to initialize pose

* init pose with datarequest in goseek reset

* remove duplicate _init_pose call

* Update setup.py

* Feature/reward update (#50)

* update reward to factor left cam offset

* fix collider request; revert init logic

Co-authored-by: Dan Griffith <[email protected]>
Co-authored-by: Zac Ravichandran <[email protected]>
3 people authored Apr 14, 2020
1 parent 2a9ad6e commit f7fd73a
Showing 5 changed files with 118 additions and 49 deletions.
2 changes: 1 addition & 1 deletion setup.py
@@ -23,7 +23,7 @@
 
 setup(
     name="tesse_gym",
-    version="0.1.4",
+    version="0.1.5",
     description="TESSE OpenAI Gym python interface",
     packages=find_packages("src"),
     # tell setuptools that all packages will be under the 'src' directory
57 changes: 50 additions & 7 deletions src/tesse_gym/core/continuous_control.py
@@ -28,6 +28,7 @@
 
 from tesse.msgs import *
 from tesse.utils import UdpListener
+from tesse_gym.core.utils import TesseConnectionError
 
 # gains 1: 150, 35, 1.6, 0.27
 # gains 2: 200, 35, 1.6, 0.27
@@ -147,6 +148,10 @@ def parse_metadata(metadata: str) -> AgentState:
 
 
 class ContinuousController:
+    INIT_STATE_MAX_ATTEMPTS = 5  # max attempts to read state from simulator
+    FORCE_X_EPS = 0.5
+    FORCE_Z_EPS = 0.5
+    TORQUE_Y_EPS = 0.001
     udp_listener_rate = 200  # frequency in hz at which to listen to UDP broadcasts
     collision_limit = 5  # break current control loop after this many collisions
 
@@ -208,6 +213,13 @@ def transform(
             rotate_y (float): Desired rotation (in radians) relative to agent.
         """
         data = self.get_data()
+
+        # if the agent's state is unknown, send small force values in the
+        # desired direction until the state can be acquired
+        if data is None:
+            self._init_state(translate_x, translate_z, rotate_y)
+            data = self.get_data()
+
         self.set_goal(data, translate_x, translate_z, rotate_y)
 
         last_z_err, last_z_rate_err = 0, 0
@@ -230,6 +242,38 @@
 
         self.set_goal(data)
 
+    def _init_state(self, translate_x, translate_z, rotate_y):
+        """ Initialize agent's state.
+
+        If the controller does not know the agent's state,
+        which happens if the simulator has not broadcast
+        metadata, apply small force values in the desired
+        direction. This will advance the agent towards the
+        goal while triggering a metadata broadcast describing
+        the agent's state.
+
+        Args:
+            translate_x (float): Desired x translation.
+            translate_z (float): Desired z translation.
+            rotate_y (float): Desired y rotation in radians.
+
+        Raises:
+            TesseConnectionError: Thrown if data has not
+                been received from the simulator after
+                `self.INIT_STATE_MAX_ATTEMPTS` attempts,
+                indicating a bad connection.
+        """
+        force_x = np.sign(translate_x) * self.FORCE_X_EPS
+        force_z = np.sign(translate_z) * self.FORCE_Z_EPS
+        torque_y = np.sign(rotate_y) * self.TORQUE_Y_EPS
+
+        for _ in range(self.INIT_STATE_MAX_ATTEMPTS):
+            self.env.send(StepWithForce(force_z, torque_y, force_x))
+            if self.get_broadcast_metadata() is not None:
+                return
+
+        raise TesseConnectionError()  # the agent's state can't be established
+
     def _in_collision(
         self, force_z: float, z_pos_error: float, last_z_pos_err: float
     ) -> bool:
@@ -260,11 +304,11 @@
 
     def get_data(self) -> AgentState:
         """ Gets agent's most recent data. """
-        if self.last_metadata is None:
-            response = self.env.request(MetadataRequest()).metadata
+        response = self.get_broadcast_metadata()
+        if response is not None:
+            return parse_metadata(response)
-        else:
-            response = self.get_broadcast_metadata()
-        return parse_metadata(response)
+        return None
 
     def set_goal(
         self,
@@ -369,9 +413,8 @@ def control(self, data: AgentState) -> Tuple[float, float]:
     def get_current_time(self) -> float:
         """ Get current sim time. """
         if self.last_metadata is None:
-            raise ValueError("Cannot get TESSE time, metadata is `NoneType`")
-        else:
-            return float(ET.fromstring(self.last_metadata).find("time").text)
+            self._init_state(0, 0, 0)
+        return float(ET.fromstring(self.last_metadata).find("time").text)
 
     def get_broadcast_metadata(self) -> str:
         """ Get metadata provided by TESSE UDP broadcasts. """
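For orientation, here is a minimal sketch of how the revised controller flow might be driven from client code. The `Env` constructor arguments and the `ContinuousController` constructor call are assumptions for illustration; only `transform`, `get_broadcast_metadata`, and `TesseConnectionError` appear in the diff above.

```python
from tesse.env import Env  # assumed import path from the tesse-interface package
from tesse_gym.core.continuous_control import ContinuousController
from tesse_gym.core.utils import TesseConnectionError

# hypothetical setup; constructor arguments are assumptions
env = Env(simulation_ip="localhost", own_ip="localhost")
controller = ContinuousController(env=env)

try:
    # transform() now self-initializes: if no metadata has been broadcast yet,
    # it nudges the agent with small forces (FORCE_X_EPS, FORCE_Z_EPS,
    # TORQUE_Y_EPS) until the simulator reports the agent's state.
    controller.transform(translate_x=0.0, translate_z=0.5, rotate_y=0.0)
except TesseConnectionError:
    # raised after INIT_STATE_MAX_ATTEMPTS failed reads, indicating a bad connection
    print("Could not establish agent state; check the TESSE connection.")
```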
58 changes: 36 additions & 22 deletions src/tesse_gym/core/tesse_gym.py
@@ -56,6 +56,7 @@ class TesseGym(GymEnv):
     )
     shape = (240, 320, 3)
     hover_height = 0.5
+    sim_query_timeout = 10
 
     def __init__(
         self,
@@ -144,9 +145,9 @@ def __init__(
         self.steps = 0
 
         self.env.request(SetHoverHeight(self.hover_height))
-        self.env.send((ColliderRequest(1)))
+        self.env.send(ColliderRequest(1))
 
-        # any experiment specific settings go here
+        # optionally adjust parameters on startup
         if init_hook and self.launch_tesse:
             init_hook(self)
 
@@ -155,7 +156,6 @@
         self.initial_pose = np.zeros((3,))
         self.initial_rotation = np.eye(2)
         self.relative_pose = np.zeros((3,))
-        self._init_pose()
 
     def advance_game_time(self, n_steps: int) -> None:
         """ Advance game time in step mode by sending step forces of 0 to TESSE. """
@@ -207,6 +207,9 @@ def step(self, action: int) -> Tuple[np.ndarray, float, bool, Dict[str, Any]]:
         reward, reward_info = self.compute_reward(response, action)
 
         if reward_info["env_changed"] and not self.done:
+            # environment changes will not advance game time
+            # advance here so the perception server will be up to date
+            self.advance_game_time(1)
             response = self.get_synced_observation()
 
         self._update_pose(response.metadata)
@@ -239,8 +242,14 @@ def reset(
         self.env.request(Respawn())
         self.done = False
         self.steps = 0
-        self._init_pose()
-        return self.form_agent_observation(self.observe())
+
+        if not self.ground_truth_mode:
+            observation = self.get_synced_observation()
+        else:
+            observation = self.observe()
+
+        self._init_pose(observation.metadata)
+        return self.form_agent_observation(observation)
 
     def render(self, mode: str = "rgb_array") -> np.ndarray:
         """ Get observation.
@@ -269,25 +278,29 @@ def get_synced_observation(self) -> DataResponse:
         Returns:
             DataResponse
         """
+        response = self.observe()
         if self.launch_tesse or self.ground_truth_mode:
-            response = self.observe()
             return response
         else:
-            self.advance_game_time(1)  # advance game time to capture env changes
-            while True:
-                response = self.observe()
-                t1 = float(
-                    ET.fromstring(self.continuous_controller.last_metadata)
-                    .find("time")
-                    .text
-                )
+            # Ensure observations are current with sim by comparing timestamps
+            requery_limit = 10
+            time_advance_frequency = 5
+            for attempts in range(requery_limit):
+                # heuristic to account for dropped messages
+                if (attempts + 1) % time_advance_frequency == 0:
+                    self.advance_game_time(1)
+
+                sim_time = self.continuous_controller.get_current_time()
+                observation_time = float(
+                    ET.fromstring(response.metadata).find("time").text
+                )
-                t2 = float(ET.fromstring(response.metadata).find("time").text)
-                timediff = np.round(t1 - t2, 2)
+                timediff = np.round(sim_time - observation_time, 2)
+
-                # if observation is late, query until image server catches up.
+                # if observation is synced with sim time, break; otherwise, requery
                 if timediff < 1 / self.step_rate:
                     break
-                else:
-                    response = self.observe()
+
+                response = self.observe()
+
         return response
 
     def form_agent_observation(self, scene_observation: DataResponse) -> np.ndarray:
@@ -371,12 +384,13 @@ def _data_request(self, request_type: DataRequest, n_attempts: int = 20):
 
         raise TesseConnectionError()
 
-    def _init_pose(self):
+    def _init_pose(self, metadata=None):
         """ Initialize agent's starting pose """
-        metadata_response = self._data_request(MetadataRequest())
+        if metadata is None:
+            metadata = self._data_request(MetadataRequest()).metadata
 
-        position = self._get_agent_position(metadata_response.metadata)
-        rotation = self._get_agent_rotation(metadata_response.metadata)
+        position = self._get_agent_position(metadata)
+        rotation = self._get_agent_rotation(metadata)
 
         # initialize position in agent frame
         initial_yaw = rotation[2]
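The sync loop above keys off the `<time>` element in TESSE's metadata XML. A small self-contained sketch of that comparison, using hypothetical metadata strings and an assumed step rate of 20 Hz:

```python
import xml.etree.ElementTree as ET
import numpy as np

step_rate = 20  # steps per second; value assumed for illustration

# hypothetical metadata payloads; real TESSE metadata carries many more fields
sim_metadata = "<metadata><time>12.35</time></metadata>"
obs_metadata = "<metadata><time>12.30</time></metadata>"

sim_time = float(ET.fromstring(sim_metadata).find("time").text)
observation_time = float(ET.fromstring(obs_metadata).find("time").text)

# the observation counts as synced when it lags the sim by less than one step
timediff = np.round(sim_time - observation_time, 2)
print(timediff, timediff < 1 / step_rate)  # 0.05 False -> would requery
```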
46 changes: 31 additions & 15 deletions src/tesse_gym/tasks/goseek/goseek.py
@@ -38,7 +38,7 @@
     SpawnObjectRequest,
 )
 from tesse_gym.core.tesse_gym import TesseGym
-from tesse_gym.core.utils import NetworkConfig
+from tesse_gym.core.utils import NetworkConfig, set_all_camera_params
 
 
 # define custom message to signal episode reset
@@ -49,7 +49,8 @@ class EpisodeResetSignal(MetadataMessage):
 
 class GoSeek(TesseGym):
     TARGET_COLOR = (10, 138, 80)
-    CAMERA_FOV = 60
+    CAMERA_HFOV = 80
+    CAMERA_REL_AGENT = np.array([-0.05, 0])
 
     def __init__(
         self,
@@ -61,7 +62,7 @@
         n_targets: Optional[int] = 30,
         success_dist: Optional[float] = 2,
         restart_on_collision: Optional[bool] = False,
-        init_hook: Optional[Callable[[TesseGym], None]] = None,
+        init_hook: Optional[Callable[[TesseGym], None]] = set_all_camera_params,
         target_found_reward: Optional[int] = 1,
         ground_truth_mode: Optional[bool] = True,
         n_target_types: Optional[int] = 1,
@@ -143,10 +144,10 @@ def reset(
                 SpawnObjectRequest(i % self.n_target_types, ObjectSpawnMethod.RANDOM)
             )
 
-        if self.step_mode:
-            self.advance_game_time(1)  # respawn doesn't advance game time
-
-        self._init_pose()
+        # respawn doesn't advance game time
+        # if running an external perception server, advance game time to refresh
+        if not self.ground_truth_mode:
+            self.advance_game_time(1)
 
         return self.form_agent_observation(self.observe())
 
@@ -180,7 +181,7 @@
         Args:
             observation (DataResponse): TESSE DataResponse object containing images
                 and metadata.
-            action (action_space): Action taken by agent.
+            action (int): Action taken by agent.
 
         Returns:
             Tuple[float, dict[str, [bool, int]]
- n_found_targets: Number of targets found during step.
"""
targets = self.env.request(ObjectsRequest())

# If not in ground truth mode, metadata will only provide position estimates
# In that case, get ground truth metadata from the controller
agent_metadata = (
observation.metadata
if self.ground_truth_mode
else self.continuous_controller.get_broadcast_metadata()
)
reward_info = {"env_changed": False, "collision": False, "n_found_targets": 0}

# compute agent's distance from targets
agent_position = self._get_agent_position(observation.metadata)
agent_position = self._get_agent_position(agent_metadata)
target_ids, target_position = self._get_target_id_and_positions(
targets.metadata
)
@@ -204,7 +213,7 @@
         # check for found targets
         if target_position.shape[0] > 0 and action == 3:
             found_targets = self.get_found_targets(
-                agent_position, target_position, target_ids, observation.metadata
+                agent_position, target_position, target_ids, agent_metadata
            )
 
         # if targets are found, update reward and related episode info
@@ -223,6 +232,7 @@
         if self.steps > self.episode_length:
             self.done = True
 
+        # collision information isn't provided by the controller metadata
         if self._collision(observation.metadata):
             reward_info["collision"] = True
 
@@ -257,22 +267,28 @@ def get_found_targets(
 
         # only compare (x, z) coordinates
         agent_position = agent_position[np.newaxis, (0, 2)]
+
+        # get left camera position in world coordinates
+        agent_orientation = self._get_agent_rotation(agent_metadata)[-1]
+        left_camera_position = agent_position + np.matmul(
+            self.get_2d_rotation_mtrx(agent_orientation), self.CAMERA_REL_AGENT
+        )
+
+        # get bearing and distance of targets w.r.t the left camera
         target_position = target_position[:, (0, 2)]
-        dists = np.linalg.norm(target_position - agent_position, axis=-1)
+        dists = np.linalg.norm(target_position - left_camera_position, axis=-1)
 
         if dists.min() < self.success_dist:
             # get positions of targets
             targets_in_range = target_ids[dists < self.success_dist]
             found_target_positions = target_position[dists < self.success_dist]
 
-            # get bearing of targets withing range
-            agent_orientation = self._get_agent_rotation(agent_metadata)[-1]
             target_bearing = self.get_target_bearing(
-                agent_orientation, found_target_positions, agent_position
+                agent_orientation, found_target_positions, left_camera_position
             )
 
-            found_targets = targets_in_range[np.where(target_bearing < self.CAMERA_FOV)]
+            # targets that meet distance and bearing requirements
+            found_targets = targets_in_range[np.where(target_bearing < self.CAMERA_HFOV / 2)]
 
         return found_targets

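As a sanity check on the new success test, here is a standalone sketch of the left-camera offset and half-FOV gating. It assumes `get_2d_rotation_mtrx` is a standard 2D rotation, that bearings are measured in degrees from the camera's forward axis, and a 2 m success distance; none of these details are spelled out in the diff.

```python
import numpy as np

CAMERA_HFOV = 80  # horizontal field of view, degrees
CAMERA_REL_AGENT = np.array([-0.05, 0])  # left camera offset in the agent frame

def rotation_2d(yaw: float) -> np.ndarray:
    """Assumed equivalent of get_2d_rotation_mtrx."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s], [s, c]])

def target_found(agent_xz, yaw, target_xz, success_dist=2.0):
    """Distance and bearing test against the left camera, mirroring get_found_targets."""
    camera_xz = agent_xz + rotation_2d(yaw) @ CAMERA_REL_AGENT
    offset = target_xz - camera_xz
    # bearing relative to the camera's forward (+z) axis; simplified, ignores angle wrapping
    bearing = np.degrees(np.abs(np.arctan2(offset[0], offset[1]) - yaw))
    return np.linalg.norm(offset) < success_dist and bearing < CAMERA_HFOV / 2

print(target_found(np.zeros(2), 0.0, np.array([0.2, 1.0])))  # True: ~1 m away, ~14 deg off axis
```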
4 changes: 0 additions & 4 deletions src/tesse_gym/tasks/goseek/goseek_full_perception.py
@@ -25,7 +25,6 @@
 from gym import spaces
 
 from tesse.msgs import Camera, Channels, Compression, DataRequest, DataResponse
-
 from tesse_gym.tasks.goseek.goseek import GoSeek
 
 
@@ -80,9 +79,6 @@ def form_agent_observation(self, tesse_data: DataResponse) -> np.ndarray:
             axis=-1,
         ).reshape(-1)
         pose = self.get_pose().reshape((3))
-
-        if (np.abs(pose) > 100).any():
-            raise ValueError("Pose is out of observation space")
         return np.concatenate((observation, pose))
 
     def observe(self) -> DataResponse:
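Several commits above mention a helper that decodes observations back into images and pose. A minimal sketch of that inverse operation, assuming the (240, 320) resolution and the RGB + segmentation + depth layout produced by `form_agent_observation`; the actual helper in the repository may differ:

```python
import numpy as np

def decode_observations(observation: np.ndarray, img_shape=(240, 320, 5)):
    """Split a flattened GoSeek observation into imagery and pose.

    Assumes the layout built above: flattened (240, 320, 5) imagery
    (3 RGB channels, 1 segmentation, 1 depth) followed by a 3-element pose.
    """
    imgs = observation[:-3].reshape(img_shape)
    rgb = imgs[..., :3]          # RGB image
    segmentation = imgs[..., 3]  # single-channel segmentation
    depth = imgs[..., 4]         # single-channel depth
    pose = observation[-3:]      # assumed (x, z, heading) relative pose
    return rgb, segmentation, depth, pose

# usage sketch with a dummy vector of the expected length
obs = np.zeros(240 * 320 * 5 + 3)
rgb, seg, depth, pose = decode_observations(obs)
```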
