add new files
162
.gitignore
vendored
Normal file
@@ -0,0 +1,162 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# Editors
.idea/
.vscode

# logs
logs
23
LICENSE
Normal file
@@ -0,0 +1,23 @@
BSD 3-Clause License
Copyright (c) 2016-2023 HangZhou YuShu TECHNOLOGY CO.,LTD. ("Unitree Robotics")
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
200
README.md
@@ -0,0 +1,200 @@
<div align="center">
<h1 align="center">Unitree RL GYM</h1>
<p align="center">
<span> 🌎English </span> | <a href="README_zh.md"> 🇨🇳中文 </a>
</p>
</div>

<p align="center">
<strong>This is a repository of reinforcement learning implementations for Unitree robots, supporting the Unitree Go2, H1, H1_2, and G1.</strong>
</p>

<div align="center">

| <div align="center"> Isaac Gym </div> | <div align="center"> Mujoco </div> | <div align="center"> Physical </div> |
|--- | --- | --- |
| [<img src="https://oss-global-cdn.unitree.com/static/32f06dc9dfe4452dac300dda45e86b34.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/5bbc5ab1d551407080ca9d58d7bec1c8.mp4) | [<img src="https://oss-global-cdn.unitree.com/static/244cd5c4f823495fbfb67ef08f56aa33.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/5aa48535ffd641e2932c0ba45c8e7854.mp4) | [<img src="https://oss-global-cdn.unitree.com/static/78c61459d3ab41448cfdb31f6a537e8b.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) |

</div>

---

## 📦 Installation and Configuration

Please refer to [setup.md](/doc/setup_en.md) for installation and configuration steps.

## 🔁 Process Overview

The basic workflow for using reinforcement learning to achieve motion control is:

`Train` → `Play` → `Sim2Sim` → `Sim2Real`

- **Train**: Use the Gym simulation environment to let the robot interact with the environment and find a policy that maximizes the designed rewards. Real-time visualization during training is not recommended to avoid reduced efficiency.
- **Play**: Use the Play command to verify the trained policy and ensure it meets expectations.
- **Sim2Sim**: Deploy the Gym-trained policy to other simulators to ensure it’s not overly specific to Gym characteristics.
- **Sim2Real**: Deploy the policy to a physical robot to achieve motion control.

## 🛠️ User Guide

### 1. Training

Run the following command to start training:

```bash
python legged_gym/scripts/train.py --task=xxx
```

#### ⚙️ Parameter Description

- `--task`: Required parameter; values can be (go2, g1, h1, h1_2).
- `--headless`: Defaults to starting with a graphical interface; set to true for headless mode (higher efficiency).
- `--resume`: Resume training from a checkpoint in the logs.
- `--experiment_name`: Name of the experiment to run/load.
- `--run_name`: Name of the run to execute/load.
- `--load_run`: Name of the run to load; defaults to the latest run.
- `--checkpoint`: Checkpoint number to load; defaults to the latest file.
- `--num_envs`: Number of environments for parallel training.
- `--seed`: Random seed.
- `--max_iterations`: Maximum number of training iterations.
- `--sim_device`: Simulation computation device; specify CPU as `--sim_device=cpu`.
- `--rl_device`: Reinforcement learning computation device; specify CPU as `--rl_device=cpu`.

**Default Training Result Directory**: `logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`
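Putting the flags together, a headless training run might be launched as follows. This is a sketch with illustrative values (`g1` task, 4096 environments, 10000 iterations are example choices, not repository defaults); composing the command into a variable makes it easy to log or reuse:

```shell
# Illustrative flag values; adjust env count and iterations to your hardware.
TASK=g1
TRAIN_CMD="python legged_gym/scripts/train.py --task=${TASK} --headless=true --num_envs=4096 --max_iterations=10000"
echo "${TRAIN_CMD}"   # run it with: eval "${TRAIN_CMD}"
```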
---

### 2. Play

To visualize the training results in Gym, run the following command:

```bash
python legged_gym/scripts/play.py --task=xxx
```

**Description**:

- Play’s parameters are the same as Train’s.
- By default, it loads the latest model from the experiment folder’s last run.
- You can specify other models using `load_run` and `checkpoint`.
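For example, to replay a specific checkpoint rather than the latest one, `--load_run` and `--checkpoint` can be combined. The run-folder name below is a hypothetical example; list `logs/<experiment_name>/` to find yours:

```shell
# Hypothetical run folder and checkpoint number, for illustration only.
PLAY_CMD="python legged_gym/scripts/play.py --task=g1 --load_run=Nov01_12-00-00_test --checkpoint=5000"
echo "${PLAY_CMD}"
```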
#### 💾 Export Network

Play exports the Actor network, saving it in `logs/{experiment_name}/exported/policies`:

- Standard networks (MLP) are exported as `policy_1.pt`.
- RNN networks are exported as `policy_lstm_1.pt`.

### Play Results

| Go2 | G1 | H1 | H1_2 |
|--- | --- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/d2e8da875473457c8d5d69c3de58b24d.mp4) | [](https://oss-global-cdn.unitree.com/static/5bbc5ab1d551407080ca9d58d7bec1c8.mp4) | [](https://oss-global-cdn.unitree.com/static/522128f4640c4f348296d2761a33bf98.mp4) |[](https://oss-global-cdn.unitree.com/static/15fa46984f2343cb83342fd39f5ab7b2.mp4)|

---

### 3. Sim2Sim (Mujoco)

Run Sim2Sim in the Mujoco simulator:

```bash
python deploy/deploy_mujoco/deploy_mujoco.py {config_name}
```

#### Parameter Description

- `config_name`: Configuration file; the default search path is `deploy/deploy_mujoco/configs/`.

#### Example: Running G1

```bash
python deploy/deploy_mujoco/deploy_mujoco.py g1.yaml
```

#### ➡️ Replace Network Model

The default model is located at `deploy/pre_train/{robot}/motion.pt`; custom-trained models are saved in `logs/g1/exported/policies/policy_lstm_1.pt`. Update the `policy_path` in the YAML configuration file accordingly.
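The `{LEGGED_GYM_ROOT_DIR}` placeholder used in `policy_path` is expanded by the deploy script with a plain string replace, so pointing a config at a self-trained policy only requires editing that one field. A minimal sketch of the resolution (the checkout location below is hypothetical):

```python
# Hypothetical checkout location; the deploy script obtains this from the
# legged_gym package at runtime.
LEGGED_GYM_ROOT_DIR = "/home/user/unitree_rl_gym"

# policy_path as it would appear in the YAML config after switching to a
# self-trained LSTM policy.
policy_path = "{LEGGED_GYM_ROOT_DIR}/logs/g1/exported/policies/policy_lstm_1.pt"

# Same substitution the deploy script performs.
resolved = policy_path.replace("{LEGGED_GYM_ROOT_DIR}", LEGGED_GYM_ROOT_DIR)
print(resolved)
```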
#### Simulation Results

| G1 | H1 | H1_2 |
|--- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/5aa48535ffd641e2932c0ba45c8e7854.mp4) | [](https://oss-global-cdn.unitree.com/static/8934052becd84d08bc8c18c95849cf32.mp4) | [](https://oss-global-cdn.unitree.com/static/ee7ee85bd6d249989a905c55c7a9d305.mp4) |

---

### 4. Sim2Real (Physical Deployment)

Before deploying to the physical robot, ensure it’s in debug mode. Detailed steps can be found in the [Physical Deployment Guide](deploy/deploy_real/README.md):

```bash
python deploy/deploy_real/deploy_real.py {net_interface} {config_name}
```

#### Parameter Description

- `net_interface`: Network card name connected to the robot, e.g., `enp3s0`.
- `config_name`: Configuration file located in `deploy/deploy_real/configs/`, e.g., `g1.yaml`, `h1.yaml`, `h1_2.yaml`.

#### Deployment Results

| G1 | H1 | H1_2 |
|--- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) | [](https://oss-global-cdn.unitree.com/static/ea0084038d384e3eaa73b961f33e6210.mp4) | [](https://oss-global-cdn.unitree.com/static/12d041a7906e489fae79d55b091a63dd.mp4) |

---

#### Deploy with C++

There is also an example of deploying the G1 pre-trained model in C++. The C++ code is located in the following directory:

```
deploy/deploy_real/cpp_g1
```

First, navigate to that directory:

```bash
cd deploy/deploy_real/cpp_g1
```

The C++ implementation depends on the LibTorch library. Download and unpack LibTorch:

```bash
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.7.1+cpu.zip
```

To build the project, execute the following steps:

```bash
mkdir build
cd build
cmake ..
make -j4
```

After successful compilation, execute the program with:

```bash
./g1_deploy_run {net_interface}
```

Replace `{net_interface}` with your actual network interface name (e.g., eth0, wlan0).

## 🎉 Acknowledgments

This repository is built upon the support and contributions of the following open-source projects. Special thanks to:

- [legged\_gym](https://github.com/leggedrobotics/legged_gym): The foundation for the training and running code.
- [rsl\_rl](https://github.com/leggedrobotics/rsl_rl.git): Reinforcement learning algorithm implementation.
- [mujoco](https://github.com/google-deepmind/mujoco.git): Provides powerful simulation functionality.
- [unitree\_sdk2\_python](https://github.com/unitreerobotics/unitree_sdk2_python.git): Hardware communication interface for physical deployment.

---

## 🔖 License

This project is licensed under the [BSD 3-Clause License](./LICENSE):

1. The original copyright notice must be retained.
2. The project name or organization name may not be used for promotion.
3. The software is provided "as is", without warranty of any kind.

For details, please read the full [LICENSE file](./LICENSE).
163
README_zh.md
Normal file
@@ -0,0 +1,163 @@
<div align="center">
<h1 align="center">Unitree RL GYM</h1>
<p align="center">
<a href="README.md">🌎 English</a> | <span>🇨🇳 中文</span>
</p>
</div>

<p align="center">
<strong>这是一个基于 Unitree 机器人实现强化学习的示例仓库,支持 Unitree Go2、H1、H1_2 和 G1。</strong>
</p>

<div align="center">

| <div align="center"> Isaac Gym </div> | <div align="center"> Mujoco </div> | <div align="center"> Physical </div> |
|--- | --- | --- |
| [<img src="https://oss-global-cdn.unitree.com/static/32f06dc9dfe4452dac300dda45e86b34.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/5bbc5ab1d551407080ca9d58d7bec1c8.mp4) | [<img src="https://oss-global-cdn.unitree.com/static/244cd5c4f823495fbfb67ef08f56aa33.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/5aa48535ffd641e2932c0ba45c8e7854.mp4) | [<img src="https://oss-global-cdn.unitree.com/static/78c61459d3ab41448cfdb31f6a537e8b.GIF" width="240px">](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) |

</div>

---

## 📦 安装配置

安装和配置步骤请参考 [setup.md](/doc/setup_zh.md)。

## 🔁 流程说明

强化学习实现运动控制的基本流程为:

`Train` → `Play` → `Sim2Sim` → `Sim2Real`

- **Train**: 通过 Gym 仿真环境,让机器人与环境互动,找到最满足奖励设计的策略。通常不推荐实时查看效果,以免降低训练效率。
- **Play**: 通过 Play 命令查看训练后的策略效果,确保策略符合预期。
- **Sim2Sim**: 将 Gym 训练完成的策略部署到其他仿真器,避免策略过度依赖 Gym 的特性。
- **Sim2Real**: 将策略部署到实物机器人,实现运动控制。

## 🛠️ 使用指南

### 1. 训练

运行以下命令进行训练:

```bash
python legged_gym/scripts/train.py --task=xxx
```

#### ⚙️ 参数说明

- `--task`: 必选参数,值可选(go2, g1, h1, h1_2)
- `--headless`: 默认启动图形界面,设为 true 时不渲染图形界面(效率更高)
- `--resume`: 从日志中选择 checkpoint 继续训练
- `--experiment_name`: 运行/加载的 experiment 名称
- `--run_name`: 运行/加载的 run 名称
- `--load_run`: 加载运行的名称,默认加载最后一次运行
- `--checkpoint`: checkpoint 编号,默认加载最新一次文件
- `--num_envs`: 并行训练的环境个数
- `--seed`: 随机种子
- `--max_iterations`: 训练的最大迭代次数
- `--sim_device`: 仿真计算设备,指定 CPU 为 `--sim_device=cpu`
- `--rl_device`: 强化学习计算设备,指定 CPU 为 `--rl_device=cpu`

**默认保存训练结果**:`logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`

---

### 2. Play

如果想要在 Gym 中查看训练效果,可以运行以下命令:

```bash
python legged_gym/scripts/play.py --task=xxx
```

**说明**:

- Play 启动参数与 Train 相同。
- 默认加载实验文件夹上次运行的最后一个模型。
- 可通过 `load_run` 和 `checkpoint` 指定其他模型。

#### 💾 导出网络

Play 会导出 Actor 网络,保存于 `logs/{experiment_name}/exported/policies` 中:

- 普通网络(MLP)导出为 `policy_1.pt`
- RNN 网络导出为 `policy_lstm_1.pt`

### Play 效果

| Go2 | G1 | H1 | H1_2 |
|--- | --- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/d2e8da875473457c8d5d69c3de58b24d.mp4) | [](https://oss-global-cdn.unitree.com/static/5bbc5ab1d551407080ca9d58d7bec1c8.mp4) | [](https://oss-global-cdn.unitree.com/static/522128f4640c4f348296d2761a33bf98.mp4) |[](https://oss-global-cdn.unitree.com/static/15fa46984f2343cb83342fd39f5ab7b2.mp4)|

---

### 3. Sim2Sim (Mujoco)

支持在 Mujoco 仿真器中运行 Sim2Sim:

```bash
python deploy/deploy_mujoco/deploy_mujoco.py {config_name}
```

#### 参数说明

- `config_name`: 配置文件,默认查询路径为 `deploy/deploy_mujoco/configs/`

#### 示例:运行 G1

```bash
python deploy/deploy_mujoco/deploy_mujoco.py g1.yaml
```

#### ➡️ 替换网络模型

默认模型位于 `deploy/pre_train/{robot}/motion.pt`;自己训练的模型保存于 `logs/g1/exported/policies/policy_lstm_1.pt`,只需替换 yaml 配置文件中的 `policy_path`。

#### 运行效果

| G1 | H1 | H1_2 |
|--- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/5aa48535ffd641e2932c0ba45c8e7854.mp4) | [](https://oss-global-cdn.unitree.com/static/8934052becd84d08bc8c18c95849cf32.mp4) | [](https://oss-global-cdn.unitree.com/static/ee7ee85bd6d249989a905c55c7a9d305.mp4) |

---

### 4. Sim2Real (实物部署)

实现实物部署前,确保机器人进入调试模式。详细步骤请参考 [实物部署指南](deploy/deploy_real/README.zh.md):

```bash
python deploy/deploy_real/deploy_real.py {net_interface} {config_name}
```

#### 参数说明

- `net_interface`: 连接机器人网卡名称,如 `enp3s0`
- `config_name`: 配置文件,存在于 `deploy/deploy_real/configs/`,如 `g1.yaml`,`h1.yaml`,`h1_2.yaml`

#### 运行效果

| G1 | H1 | H1_2 |
|--- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) | [](https://oss-global-cdn.unitree.com/static/ea0084038d384e3eaa73b961f33e6210.mp4) | [](https://oss-global-cdn.unitree.com/static/12d041a7906e489fae79d55b091a63dd.mp4) |

---

## 🎉 致谢

本仓库开发离不开以下开源项目的支持与贡献,特此感谢:

- [legged\_gym](https://github.com/leggedrobotics/legged_gym): 构建训练与运行代码的基础。
- [rsl\_rl](https://github.com/leggedrobotics/rsl_rl.git): 强化学习算法实现。
- [mujoco](https://github.com/google-deepmind/mujoco.git): 提供强大仿真功能。
- [unitree\_sdk2\_python](https://github.com/unitreerobotics/unitree_sdk2_python.git): 实物部署硬件通信接口。

---

## 🔖 许可证

本项目根据 [BSD 3-Clause License](./LICENSE) 授权:

1. 必须保留原始版权声明。
2. 不得使用项目名或组织名进行推广。
3. 本软件按“原样”提供,不附带任何形式的担保。

详情请阅读完整 [LICENSE 文件](./LICENSE)。
26
deploy/deploy_mujoco/configs/g1.yaml
Normal file
@@ -0,0 +1,26 @@
#
policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/g1/motion.pt"
xml_path: "{LEGGED_GYM_ROOT_DIR}/resources/robots/g1_description/scene.xml"

# Total simulation time
simulation_duration: 60.0
# Simulation time step
simulation_dt: 0.002
# Controller update frequency (meets the requirement of simulation_dt * control_decimation = 0.02; 50 Hz)
control_decimation: 10

kps: [100, 100, 100, 150, 40, 40, 100, 100, 100, 150, 40, 40]
kds: [2, 2, 2, 4, 2, 2, 2, 2, 2, 4, 2, 2]

default_angles: [-0.1, 0.0, 0.0, 0.3, -0.2, 0.0,
                 -0.1, 0.0, 0.0, 0.3, -0.2, 0.0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 12
num_obs: 47

cmd_init: [0.5, 0, 0]
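As a consistency note: `num_obs` in these configs is not independent of `num_actions`. The Mujoco deploy script packs base angular velocity (3), projected gravity (3), the scaled command (3), joint positions, joint velocities, and the previous action (`num_actions` each), plus a sin/cos gait-phase pair, so a quick sanity check is:

```python
def expected_num_obs(num_actions: int) -> int:
    # 3 (omega) + 3 (gravity) + 3 (command) + 3 * num_actions + 2 (sin/cos phase)
    return 9 + 3 * num_actions + 2

assert expected_num_obs(12) == 47  # matches the G1 config above (num_actions: 12)
assert expected_num_obs(10) == 41  # matches the H1 config (num_actions: 10)
```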
26
deploy/deploy_mujoco/configs/h1.yaml
Normal file
@@ -0,0 +1,26 @@
#
policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/h1/motion.pt"
xml_path: "{LEGGED_GYM_ROOT_DIR}/resources/robots/h1/scene.xml"

# Total simulation time
simulation_duration: 60.0
# Simulation time step
simulation_dt: 0.002
# Controller update frequency (meets the requirement of simulation_dt * control_decimation = 0.02; 50 Hz)
control_decimation: 10

kps: [150, 150, 150, 200, 40, 150, 150, 150, 200, 40]
kds: [2, 2, 2, 4, 2, 2, 2, 2, 4, 2]

default_angles: [0, 0.0, -0.1, 0.3, -0.2,
                 0, 0.0, -0.1, 0.3, -0.2]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 10
num_obs: 41

cmd_init: [0.5, 0, 0]
26
deploy/deploy_mujoco/configs/h1_2.yaml
Normal file
@@ -0,0 +1,26 @@
#
policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/h1_2/motion.pt"
xml_path: "{LEGGED_GYM_ROOT_DIR}/resources/robots/h1_2/scene.xml"

# Total simulation time
simulation_duration: 60.0
# Simulation time step
simulation_dt: 0.002
# Controller update frequency (meets the requirement of simulation_dt * control_decimation = 0.02; 50 Hz)
control_decimation: 10

kps: [200, 200, 200, 300, 40, 40, 200, 200, 200, 300, 40, 40]
kds: [2.5, 2.5, 2.5, 4, 2, 2, 2.5, 2.5, 2.5, 4, 2, 2]

default_angles: [0, -0.16, 0.0, 0.36, -0.2, 0.0,
                 0, -0.16, 0.0, 0.36, -0.2, 0.0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 12
num_obs: 47

cmd_init: [0.5, 0, 0]
26
deploy/deploy_mujoco/configs/sg1.yaml
Normal file
@@ -0,0 +1,26 @@
#
policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/smallgray/policy_lstm_1.pt"
xml_path: "{LEGGED_GYM_ROOT_DIR}/resources/robots/smallgray/scene1.xml"

# Total simulation time
simulation_duration: 60.0
# Simulation time step
simulation_dt: 0.002
# Controller update frequency (meets the requirement of simulation_dt * control_decimation = 0.02; 50 Hz)
control_decimation: 10

kps: [100, 100, 100, 150, 40, 40, 100, 100, 100, 150, 40, 40]
kds: [2, 2, 2, 4, 2, 2, 2, 2, 2, 4, 2, 2]

default_angles: [-0.1, 0.0, 0.0, 0.3, -0.2, 0.0,
                 -0.1, 0.0, 0.0, 0.3, -0.2, 0.0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 12
num_obs: 47

cmd_init: [0.5, 0, 0]
130
deploy/deploy_mujoco/deploy_mujoco.py
Normal file
@@ -0,0 +1,130 @@
import time

import mujoco.viewer
import mujoco
import numpy as np
from legged_gym import LEGGED_GYM_ROOT_DIR
import torch
import yaml


def get_gravity_orientation(quaternion):
    qw = quaternion[0]
    qx = quaternion[1]
    qy = quaternion[2]
    qz = quaternion[3]

    gravity_orientation = np.zeros(3)

    gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
    gravity_orientation[1] = -2 * (qz * qy + qw * qx)
    gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)

    return gravity_orientation


def pd_control(target_q, q, kp, target_dq, dq, kd):
    """Calculates torques from position commands"""
    return (target_q - q) * kp + (target_dq - dq) * kd


if __name__ == "__main__":
    # get config file name from command line
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("config_file", type=str, help="config file name in the config folder")
    args = parser.parse_args()
    config_file = args.config_file
    with open(f"{LEGGED_GYM_ROOT_DIR}/deploy/deploy_mujoco/configs/{config_file}", "r") as f:
        config = yaml.load(f, Loader=yaml.FullLoader)
        policy_path = config["policy_path"].replace("{LEGGED_GYM_ROOT_DIR}", LEGGED_GYM_ROOT_DIR)
        xml_path = config["xml_path"].replace("{LEGGED_GYM_ROOT_DIR}", LEGGED_GYM_ROOT_DIR)

        simulation_duration = config["simulation_duration"]
        simulation_dt = config["simulation_dt"]
        control_decimation = config["control_decimation"]

        kps = np.array(config["kps"], dtype=np.float32)
        kds = np.array(config["kds"], dtype=np.float32)

        default_angles = np.array(config["default_angles"], dtype=np.float32)

        ang_vel_scale = config["ang_vel_scale"]
        dof_pos_scale = config["dof_pos_scale"]
        dof_vel_scale = config["dof_vel_scale"]
        action_scale = config["action_scale"]
        cmd_scale = np.array(config["cmd_scale"], dtype=np.float32)

        num_actions = config["num_actions"]
        num_obs = config["num_obs"]

        cmd = np.array(config["cmd_init"], dtype=np.float32)

    # define context variables
    action = np.zeros(num_actions, dtype=np.float32)
    target_dof_pos = default_angles.copy()
    obs = np.zeros(num_obs, dtype=np.float32)

    counter = 0

    # Load robot model
    m = mujoco.MjModel.from_xml_path(xml_path)
    d = mujoco.MjData(m)
    m.opt.timestep = simulation_dt

    # load policy
    policy = torch.jit.load(policy_path)

    with mujoco.viewer.launch_passive(m, d) as viewer:
        # Close the viewer automatically after simulation_duration wall-seconds.
        start = time.time()
        while viewer.is_running() and time.time() - start < simulation_duration:
            step_start = time.time()
            tau = pd_control(target_dof_pos, d.qpos[7:], kps, np.zeros_like(kds), d.qvel[6:], kds)
            d.ctrl[:] = tau
            # mj_step can be replaced with code that also evaluates
            # a policy and applies a control signal before stepping the physics.
            mujoco.mj_step(m, d)

            counter += 1
            if counter % control_decimation == 0:
                # Apply control signal here.

                # create observation
                qj = d.qpos[7:]
                dqj = d.qvel[6:]
                quat = d.qpos[3:7]
                omega = d.qvel[3:6]

                qj = (qj - default_angles) * dof_pos_scale
                dqj = dqj * dof_vel_scale
                gravity_orientation = get_gravity_orientation(quat)
                omega = omega * ang_vel_scale

                period = 0.8
                count = counter * simulation_dt
                phase = count % period / period
                sin_phase = np.sin(2 * np.pi * phase)
                cos_phase = np.cos(2 * np.pi * phase)

                obs[:3] = omega
                obs[3:6] = gravity_orientation
                obs[6:9] = cmd * cmd_scale
                obs[9 : 9 + num_actions] = qj
                obs[9 + num_actions : 9 + 2 * num_actions] = dqj
                obs[9 + 2 * num_actions : 9 + 3 * num_actions] = action
                obs[9 + 3 * num_actions : 9 + 3 * num_actions + 2] = np.array([sin_phase, cos_phase])
                obs_tensor = torch.from_numpy(obs).unsqueeze(0)
                # policy inference
                action = policy(obs_tensor).detach().numpy().squeeze()
                # transform action to target_dof_pos
                target_dof_pos = action * action_scale + default_angles

            # Pick up changes to the physics state, apply perturbations, update options from GUI.
            viewer.sync()

            # Rudimentary time keeping, will drift relative to wall clock.
            time_until_next_step = m.opt.timestep - (time.time() - step_start)
            if time_until_next_step > 0:
                time.sleep(time_until_next_step)
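As an editorial aside, the two helpers in `deploy_mujoco.py` are easy to sanity-check in isolation. The sketch below re-implements them standalone (same math as the listing above) and verifies that the identity quaternion, i.e. an upright base, maps gravity to straight down in the body frame, and that `pd_control` reduces to a pure proportional term at zero velocity:

```python
import numpy as np


def get_gravity_orientation(quaternion):
    # Same math as deploy_mujoco.py: project world gravity into the body frame.
    qw, qx, qy, qz = quaternion
    gravity_orientation = np.zeros(3)
    gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
    gravity_orientation[1] = -2 * (qz * qy + qw * qx)
    gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)
    return gravity_orientation


def pd_control(target_q, q, kp, target_dq, dq, kd):
    """Calculates torques from position commands"""
    return (target_q - q) * kp + (target_dq - dq) * kd


# Upright robot: identity quaternion (w, x, y, z) -> gravity along body -z.
upright = get_gravity_orientation(np.array([1.0, 0.0, 0.0, 0.0]))
assert np.allclose(upright, [0.0, 0.0, -1.0])

# With zero velocities the PD law is a pure proportional term: kp * error.
tau = pd_control(0.3, 0.2, 100.0, 0.0, 0.0, 2.0)
assert np.isclose(tau, 10.0)
```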
83
deploy/deploy_real/README.md
Normal file
@@ -0,0 +1,83 @@
|
||||
# Deploy on Physical Robot
|
||||
|
||||
This code can deploy the trained network on physical robots. Currently supported robots include Unitree G1, H1, H1_2.
|
||||
|
||||
| G1 | H1 | H1_2 |
|
||||
|--- | --- | --- |
|
||||
| [](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) | [](https://oss-global-cdn.unitree.com/static/ea0084038d384e3eaa73b961f33e6210.mp4) | [](https://oss-global-cdn.unitree.com/static/12d041a7906e489fae79d55b091a63dd.mp4) |
## Startup Usage

```bash
python deploy_real.py {net_interface} {config_name}
```

- `net_interface`: the name of the network interface connected to the robot, e.g. `enp3s0`
- `config_name`: the file name of the configuration file. Configuration files are looked up under `deploy/deploy_real/configs/`, e.g. `g1.yaml`, `h1.yaml`, `h1_2.yaml`.

## Startup Process

### 1. Start the robot

Start the robot in the hoisting (suspended) state and wait for it to enter `zero torque mode`.

### 2. Enter the debugging mode

With the robot in `zero torque mode`, press the `L2+R2` key combination on the remote control. The robot enters `debugging mode`, in which its joints are in a damping state.

### 3. Connect the robot

Use an Ethernet cable to connect your computer to the network port on the robot, and modify your network configuration as follows:

<img src="https://doc-cdn.unitree.com/static/2023/9/6/0f51cb9b12f94f0cb75070d05118c00a_980x816.jpg" width="400px">

Then use the `ifconfig` command to find the name of the network interface connected to the robot. Note this name down; it is used later as a parameter of the startup command.

<img src="https://oss-global-cdn.unitree.com/static/b84485f386994ef08b0ccfa928ab3830_825x484.png" width="400px">

### 4. Start the program

Assuming the network interface currently connected to the physical robot is named `enp3s0`, and taking the G1 robot as an example, run the following command:

```bash
python deploy_real.py enp3s0 g1.yaml
```

#### 4.1 Zero torque state

After startup, the robot joints are in the zero torque state. You can move the joints by hand to confirm this.

#### 4.2 Default position state

In the zero torque state, press the `start` button on the remote control; the robot moves to the default joint position state.

After the robot reaches the default joint position, slowly lower the hoisting mechanism until the robot's feet touch the ground.

#### 4.3 Motion control mode

Once preparation is complete, press the `A` button on the remote control; the robot starts stepping in place. After the robot is stable, gradually lower the hoisting rope to give the robot some room to move.

You can now use the joysticks on the remote control to drive the robot:

- Left joystick forward/backward: movement speed in the x direction
- Left joystick left/right: movement speed in the y direction
- Right joystick left/right: yaw angular speed
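The mapping above can be sketched numerically. Below is a hypothetical helper (not part of this repo; names and sign conventions are illustrative) that turns raw stick axes in `[-1, 1]` into a velocity command, clipped by the `max_cmd` limits from `configs/g1.yaml`:

```python
import numpy as np

MAX_CMD = np.array([0.8, 0.5, 1.57])  # [vx, vy, yaw rate] limits from g1.yaml

def joystick_to_cmd(ly: float, lx: float, rx: float) -> np.ndarray:
    """Left stick fwd/back -> vx, left stick left/right -> vy, right stick left/right -> yaw rate."""
    axes = np.clip(np.array([ly, -lx, -rx]), -1.0, 1.0)  # sign convention assumed
    return axes * MAX_CMD

cmd = joystick_to_cmd(1.0, 0.0, 0.0)  # left stick fully forward
```

Pushing the left stick fully forward would then command 0.8 m/s in x while leaving y and yaw at zero.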
#### 4.4 Exit control

In motion control mode, press the `select` button on the remote control; the robot enters damping mode and falls down, and the program exits. Alternatively, press `ctrl+c` in the terminal to close the program.

> Note:
>
> This example deployment is not a stable, production-grade control program and is intended for demonstration only. Try not to disturb the robot during control. If anything unexpected happens during control, exit promptly to avoid danger.

## Video tutorial

deploy on G1 robot: [https://oss-global-cdn.unitree.com/static/ea12b7fb3bb641b3ada9c91d44d348db.mp4](https://oss-global-cdn.unitree.com/static/ea12b7fb3bb641b3ada9c91d44d348db.mp4)

deploy on H1 robot: [https://oss-global-cdn.unitree.com/static/d4e0cdd8ce0d477fb5bffbb1ac5ef69d.mp4](https://oss-global-cdn.unitree.com/static/d4e0cdd8ce0d477fb5bffbb1ac5ef69d.mp4)

deploy on H1_2 robot: [https://oss-global-cdn.unitree.com/static/8120e35452404923b854d9e98bdac951.mp4](https://oss-global-cdn.unitree.com/static/8120e35452404923b854d9e98bdac951.mp4)
79
deploy/deploy_real/README.zh.md
Normal file
@@ -0,0 +1,79 @@
# Real-Robot Deployment

This code deploys the trained network on a physical robot. Currently supported robots: Unitree G1, H1, H1_2.

| G1 | H1 | H1_2 |
|--- | --- | --- |
| [](https://oss-global-cdn.unitree.com/static/0818dcf7a6874b92997354d628adcacd.mp4) | [](https://oss-global-cdn.unitree.com/static/ea0084038d384e3eaa73b961f33e6210.mp4) | [](https://oss-global-cdn.unitree.com/static/12d041a7906e489fae79d55b091a63dd.mp4) |

## Startup Usage

```bash
python deploy_real.py {net_interface} {config_name}
```

- `net_interface`: the name of the network interface connected to the robot, e.g. `enp3s0`
- `config_name`: the file name of the configuration file. Configuration files are looked up under `deploy/deploy_real/configs/`, e.g. `g1.yaml`, `h1.yaml`, `h1_2.yaml`.

## Startup Process

### 1. Start the robot

Start the robot in the hoisting (suspended) state and wait for it to enter `zero torque mode`.

### 2. Enter the debugging mode

With the robot in `zero torque mode`, press the `L2+R2` key combination on the remote control. The robot enters `debugging mode`, in which its joints are in a damping state.

### 3. Connect the robot

Use an Ethernet cable to connect your computer to the network port on the robot, and modify your network configuration as follows:

<img src="https://doc-cdn.unitree.com/static/2023/9/6/0f51cb9b12f94f0cb75070d05118c00a_980x816.jpg" width="400px">

Then use the `ifconfig` command to find the name of the network interface connected to the robot. Note this name down; it is used later as a parameter of the startup command.

<img src="https://oss-global-cdn.unitree.com/static/b84485f386994ef08b0ccfa928ab3830_825x484.png" width="400px">

### 4. Start the program

Assuming the network interface currently connected to the physical robot is named `enp3s0`, and taking the G1 robot as an example, run the following command:

```bash
python deploy_real.py enp3s0 g1.yaml
```

#### 4.1 Zero torque state

After startup, the robot joints are in the zero torque state. You can move the joints by hand to confirm this.

#### 4.2 Default position state

In the zero torque state, press the `start` button on the remote control; the robot moves to the default joint position state.

After the robot reaches the default joint position, slowly lower the hoisting mechanism until the robot's feet touch the ground.

#### 4.3 Motion control mode

Once preparation is complete, press the `A` button on the remote control; the robot starts stepping in place. After the robot is stable, gradually lower the hoisting rope to give the robot some room to move.

You can now use the joysticks on the remote control to drive the robot:

- Left joystick forward/backward: movement speed in the x direction
- Left joystick left/right: movement speed in the y direction
- Right joystick left/right: yaw angular speed

#### 4.4 Exit control

In motion control mode, press the `select` button on the remote control; the robot enters damping mode and falls down, and the program exits. Alternatively, press `ctrl+c` in the terminal to close the program.

> Note:
>
> This example deployment is not a stable, production-grade control program and is intended for demonstration only. Try not to disturb the robot during control. If anything unexpected happens during control, exit promptly to avoid danger.

## Video tutorial

deploy on G1 robot: [https://oss-global-cdn.unitree.com/static/ff70a257ddf34adbb67733d8a90b24d4.mp4](https://oss-global-cdn.unitree.com/static/ff70a257ddf34adbb67733d8a90b24d4.mp4)

deploy on H1 robot: [https://oss-global-cdn.unitree.com/static/6dae1756f4214409a5ced7386ca011ae.mp4](https://oss-global-cdn.unitree.com/static/6dae1756f4214409a5ced7386ca011ae.mp4)

deploy on H1_2 robot: [https://oss-global-cdn.unitree.com/static/9d61a1470d3e4b9a9af6d131324fcb94.mp4](https://oss-global-cdn.unitree.com/static/9d61a1470d3e4b9a9af6d131324fcb94.mp4)
61
deploy/deploy_real/common/command_helper.py
Normal file
@@ -0,0 +1,61 @@
from unitree_sdk2py.idl.unitree_go.msg.dds_ import LowCmd_ as LowCmdGo
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowCmd_ as LowCmdHG
from typing import Union


class MotorMode:
    PR = 0  # Series Control for Pitch/Roll Joints
    AB = 1  # Parallel Control for A/B Joints


def create_damping_cmd(cmd: Union[LowCmdGo, LowCmdHG]):
    size = len(cmd.motor_cmd)
    for i in range(size):
        cmd.motor_cmd[i].q = 0
        cmd.motor_cmd[i].qd = 0
        cmd.motor_cmd[i].kp = 0
        cmd.motor_cmd[i].kd = 8
        cmd.motor_cmd[i].tau = 0


def create_zero_cmd(cmd: Union[LowCmdGo, LowCmdHG]):
    size = len(cmd.motor_cmd)
    for i in range(size):
        cmd.motor_cmd[i].q = 0
        cmd.motor_cmd[i].qd = 0
        cmd.motor_cmd[i].kp = 0
        cmd.motor_cmd[i].kd = 0
        cmd.motor_cmd[i].tau = 0


def init_cmd_hg(cmd: LowCmdHG, mode_machine: int, mode_pr: int):
    cmd.mode_machine = mode_machine
    cmd.mode_pr = mode_pr
    size = len(cmd.motor_cmd)
    for i in range(size):
        cmd.motor_cmd[i].mode = 1
        cmd.motor_cmd[i].q = 0
        cmd.motor_cmd[i].qd = 0
        cmd.motor_cmd[i].kp = 0
        cmd.motor_cmd[i].kd = 0
        cmd.motor_cmd[i].tau = 0


def init_cmd_go(cmd: LowCmdGo, weak_motor: list):
    cmd.head[0] = 0xFE
    cmd.head[1] = 0xEF
    cmd.level_flag = 0xFF
    cmd.gpio = 0
    PosStopF = 2.146e9
    VelStopF = 16000.0
    size = len(cmd.motor_cmd)
    for i in range(size):
        if i in weak_motor:
            cmd.motor_cmd[i].mode = 1
        else:
            cmd.motor_cmd[i].mode = 0x0A
        cmd.motor_cmd[i].q = PosStopF
        cmd.motor_cmd[i].qd = VelStopF
        cmd.motor_cmd[i].kp = 0
        cmd.motor_cmd[i].kd = 0
        cmd.motor_cmd[i].tau = 0
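The command helpers above only mutate fields on a DDS message, so their effect can be checked offline with a duck-typed stub in place of the `unitree_sdk2py` message type (a sketch; the stub is not the real message class):

```python
from types import SimpleNamespace

def make_stub_cmd(n):
    """Stand-in for LowCmd_ with n motor command slots and non-zero initial gains."""
    return SimpleNamespace(
        motor_cmd=[SimpleNamespace(q=1.0, qd=1.0, kp=50.0, kd=2.0, tau=0.5) for _ in range(n)]
    )

def create_damping_cmd(cmd):  # same loop body as the helper above
    for m in cmd.motor_cmd:
        m.q = 0
        m.qd = 0
        m.kp = 0
        m.kd = 8  # pure joint damping, no position tracking
        m.tau = 0

cmd = make_stub_cmd(12)
create_damping_cmd(cmd)
```

After the call, every motor command has zero stiffness and feed-forward torque, with only the velocity-damping gain left non-zero.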
39
deploy/deploy_real/common/remote_controller.py
Normal file
@@ -0,0 +1,39 @@
import struct


class KeyMap:
    R1 = 0
    L1 = 1
    start = 2
    select = 3
    R2 = 4
    L2 = 5
    F1 = 6
    F2 = 7
    A = 8
    B = 9
    X = 10
    Y = 11
    up = 12
    right = 13
    down = 14
    left = 15


class RemoteController:
    def __init__(self):
        self.lx = 0
        self.ly = 0
        self.rx = 0
        self.ry = 0
        self.button = [0] * 16

    def set(self, data):
        # wireless_remote layout: 16-bit button bitfield at bytes 2-4,
        # floats lx/rx/ry at offsets 4/8/12, ly at offset 20
        keys = struct.unpack("H", data[2:4])[0]
        for i in range(16):
            self.button[i] = (keys & (1 << i)) >> i
        self.lx = struct.unpack("f", data[4:8])[0]
        self.rx = struct.unpack("f", data[8:12])[0]
        self.ry = struct.unpack("f", data[12:16])[0]
        self.ly = struct.unpack("f", data[20:24])[0]
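The byte offsets that `set` relies on can be exercised with a synthetic payload (little-endian assumed; offset 16 is skipped because, in the C++ `joystick.h` layout, it holds the analog L2 trigger):

```python
import struct

data = bytearray(24)
struct.pack_into("<H", data, 2, (1 << 8) | (1 << 2))  # press A (bit 8) and start (bit 2)
struct.pack_into("<f", data, 4, 0.5)                  # lx
struct.pack_into("<f", data, 20, -1.0)                # ly

keys = struct.unpack_from("<H", data, 2)[0]
buttons = [(keys >> i) & 1 for i in range(16)]  # same bit-unpacking as RemoteController.set
```

With this payload, `buttons[KeyMap.A]` and `buttons[KeyMap.start]` are 1 and all other buttons are 0.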
25
deploy/deploy_real/common/rotation_helper.py
Normal file
@@ -0,0 +1,25 @@
import numpy as np
from scipy.spatial.transform import Rotation as R


def get_gravity_orientation(quaternion):
    qw = quaternion[0]
    qx = quaternion[1]
    qy = quaternion[2]
    qz = quaternion[3]

    gravity_orientation = np.zeros(3)

    gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
    gravity_orientation[1] = -2 * (qz * qy + qw * qx)
    gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)

    return gravity_orientation


def transform_imu_data(waist_yaw, waist_yaw_omega, imu_quat, imu_omega):
    RzWaist = R.from_euler("z", waist_yaw).as_matrix()
    R_torso = R.from_quat([imu_quat[1], imu_quat[2], imu_quat[3], imu_quat[0]]).as_matrix()
    R_pelvis = np.dot(R_torso, RzWaist.T)
    w = np.dot(RzWaist, imu_omega[0]) - np.array([0, 0, waist_yaw_omega])
    return R.from_matrix(R_pelvis).as_quat()[[3, 0, 1, 2]], w
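`get_gravity_orientation` computes the world gravity direction `[0, 0, -1]` rotated into the body frame, i.e. `R(q).T @ [0, 0, -1]` for a body-to-world rotation built from a `(w, x, y, z)` quaternion. A numpy-only sanity check (the quaternion below is arbitrary):

```python
import numpy as np

def get_gravity_orientation(q):  # copy of the helper above, q = (w, x, y, z)
    qw, qx, qy, qz = q
    return np.array([2 * (-qz * qx + qw * qy),
                     -2 * (qz * qy + qw * qx),
                     1 - 2 * (qw * qw + qz * qz)])

def quat_to_mat(qw, qx, qy, qz):
    """Standard body-to-world rotation matrix from a unit quaternion."""
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qw * qz), 2 * (qx * qz + qw * qy)],
        [2 * (qx * qy + qw * qz), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qw * qx)],
        [2 * (qx * qz - qw * qy), 2 * (qy * qz + qw * qx), 1 - 2 * (qx * qx + qy * qy)],
    ])

q = np.array([0.9, 0.1, -0.3, 0.2])
q /= np.linalg.norm(q)
expected = quat_to_mat(*q).T @ np.array([0.0, 0.0, -1.0])
```

Since the helper is just a rotated unit vector, its output should also have unit norm for any unit quaternion.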
43
deploy/deploy_real/config.py
Normal file
@@ -0,0 +1,43 @@
from legged_gym import LEGGED_GYM_ROOT_DIR
import numpy as np
import yaml


class Config:
    def __init__(self, file_path) -> None:
        with open(file_path, "r") as f:
            config = yaml.load(f, Loader=yaml.FullLoader)

            self.control_dt = config["control_dt"]

            self.msg_type = config["msg_type"]
            self.imu_type = config["imu_type"]

            self.weak_motor = []
            if "weak_motor" in config:
                self.weak_motor = config["weak_motor"]

            self.lowcmd_topic = config["lowcmd_topic"]
            self.lowstate_topic = config["lowstate_topic"]

            self.policy_path = config["policy_path"].replace("{LEGGED_GYM_ROOT_DIR}", LEGGED_GYM_ROOT_DIR)

            self.leg_joint2motor_idx = config["leg_joint2motor_idx"]
            self.kps = config["kps"]
            self.kds = config["kds"]
            self.default_angles = np.array(config["default_angles"], dtype=np.float32)

            self.arm_waist_joint2motor_idx = config["arm_waist_joint2motor_idx"]
            self.arm_waist_kps = config["arm_waist_kps"]
            self.arm_waist_kds = config["arm_waist_kds"]
            self.arm_waist_target = np.array(config["arm_waist_target"], dtype=np.float32)

            self.ang_vel_scale = config["ang_vel_scale"]
            self.dof_pos_scale = config["dof_pos_scale"]
            self.dof_vel_scale = config["dof_vel_scale"]
            self.action_scale = config["action_scale"]
            self.cmd_scale = np.array(config["cmd_scale"], dtype=np.float32)
            self.max_cmd = np.array(config["max_cmd"], dtype=np.float32)

            self.num_actions = config["num_actions"]
            self.num_obs = config["num_obs"]
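The `policy_path` entry is not a plain path: it carries a `{LEGGED_GYM_ROOT_DIR}` placeholder that `Config` expands with a simple `str.replace`. A standalone illustration (the root directory below is made up):

```python
# Hypothetical checkout location, standing in for legged_gym.LEGGED_GYM_ROOT_DIR
root = "/home/user/unitree_rl_gym"
policy_path = "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/g1/motion.pt"

# Same substitution Config performs when loading a yaml file
resolved = policy_path.replace("{LEGGED_GYM_ROOT_DIR}", root)
print(resolved)  # /home/user/unitree_rl_gym/deploy/pre_train/g1/motion.pt
```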
42
deploy/deploy_real/configs/g1.yaml
Normal file
@@ -0,0 +1,42 @@
#
control_dt: 0.02

msg_type: "hg"  # "hg" or "go"
imu_type: "pelvis"  # "torso" or "pelvis"

lowcmd_topic: "rt/lowcmd"
lowstate_topic: "rt/lowstate"

policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/g1/motion.pt"

leg_joint2motor_idx: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
kps: [100, 100, 100, 150, 40, 40, 100, 100, 100, 150, 40, 40]
kds: [2, 2, 2, 4, 2, 2, 2, 2, 2, 4, 2, 2]
default_angles: [-0.1, 0.0, 0.0, 0.3, -0.2, 0.0,
                 -0.1, 0.0, 0.0, 0.3, -0.2, 0.0]

arm_waist_joint2motor_idx: [12, 13, 14,
                            15, 16, 17, 18, 19, 20, 21,
                            22, 23, 24, 25, 26, 27, 28]

arm_waist_kps: [300, 300, 300,
                100, 100, 50, 50, 20, 20, 20,
                100, 100, 50, 50, 20, 20, 20]

arm_waist_kds: [3, 3, 3,
                2, 2, 2, 2, 1, 1, 1,
                2, 2, 2, 2, 1, 1, 1]

arm_waist_target: [0, 0, 0,
                   0, 0, 0, 0, 0, 0, 0,
                   0, 0, 0, 0, 0, 0, 0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 12
num_obs: 47

max_cmd: [0.8, 0.5, 1.57]
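A quick consistency check (a sketch, with values copied from the tables above): the per-leg lists should all have `num_actions` entries, and the arm/waist gain lists should match the length of `arm_waist_joint2motor_idx`:

```python
# Values copied verbatim from g1.yaml
kps = [100, 100, 100, 150, 40, 40, 100, 100, 100, 150, 40, 40]
kds = [2, 2, 2, 4, 2, 2, 2, 2, 2, 4, 2, 2]
default_angles = [-0.1, 0.0, 0.0, 0.3, -0.2, 0.0, -0.1, 0.0, 0.0, 0.3, -0.2, 0.0]
num_actions = 12
arm_waist_idx = list(range(12, 29))  # motors 12..28, as listed in the yaml
arm_waist_kps = [300, 300, 300, 100, 100, 50, 50, 20, 20, 20, 100, 100, 50, 50, 20, 20, 20]
```

Checks like these catch a common deployment mistake: editing one gain table without updating the matching joint-index list.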
44
deploy/deploy_real/configs/h1.yaml
Normal file
@@ -0,0 +1,44 @@
#
control_dt: 0.02

msg_type: "go"  # "hg" or "go"
imu_type: "torso"  # "torso" or "pelvis"

weak_motor: [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

lowcmd_topic: "rt/lowcmd"
lowstate_topic: "rt/lowstate"

policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/h1/motion.pt"

leg_joint2motor_idx: [7, 3, 4, 5, 10, 8, 0, 1, 2, 11]
kps: [150, 150, 150, 200, 40, 150, 150, 150, 200, 40]
kds: [2, 2, 2, 4, 2, 2, 2, 2, 4, 2]
default_angles: [0, 0.0, -0.1, 0.3, -0.2,
                 0, 0.0, -0.1, 0.3, -0.2]

arm_waist_joint2motor_idx: [6,
                            16, 17, 18, 19,
                            12, 13, 14, 15]

arm_waist_kps: [300,
                100, 100, 50, 50,
                100, 100, 50, 50]

arm_waist_kds: [3,
                2, 2, 2, 2,
                2, 2, 2, 2]

arm_waist_target: [0,
                   0, 0, 0, 0,
                   0, 0, 0, 0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 10
num_obs: 41

max_cmd: [0.8, 0.5, 1.57]
45
deploy/deploy_real/configs/h1_2.yaml
Normal file
@@ -0,0 +1,45 @@
#
control_dt: 0.02

msg_type: "hg"  # "hg" or "go"
imu_type: "torso"  # "torso" or "pelvis"

lowcmd_topic: "rt/lowcmd"
lowstate_topic: "rt/lowstate"

policy_path: "{LEGGED_GYM_ROOT_DIR}/deploy/pre_train/h1_2/motion.pt"

leg_joint2motor_idx: [0, 1, 2, 3, 4, 5,
                      6, 7, 8, 9, 10, 11]
kps: [200, 200, 200, 300, 40, 40,
      200, 200, 200, 300, 40, 40]
kds: [2.5, 2.5, 2.5, 4, 2, 2,
      2.5, 2.5, 2.5, 4, 2, 2]
default_angles: [0, -0.16, 0.0, 0.36, -0.2, 0.0,
                 0, -0.16, 0.0, 0.36, -0.2, 0.0]

arm_waist_joint2motor_idx: [12,
                            13, 14, 15, 16, 17, 18, 19,
                            20, 21, 22, 23, 24, 25, 26]

arm_waist_kps: [300,
                120, 120, 120, 80, 80, 80, 80,
                120, 120, 120, 80, 80, 80, 80]

arm_waist_kds: [3,
                2, 2, 2, 1, 1, 1, 1,
                2, 2, 2, 1, 1, 1, 1]

arm_waist_target: [0,
                   0, 0, 0, 0, 0, 0, 0,
                   0, 0, 0, 0, 0, 0, 0]

ang_vel_scale: 0.25
dof_pos_scale: 1.0
dof_vel_scale: 0.05
action_scale: 0.25
cmd_scale: [2.0, 2.0, 0.25]
num_actions: 12
num_obs: 47

max_cmd: [0.8, 0.5, 1.57]
49
deploy/deploy_real/cpp_g1/AtomicLock.h
Normal file
@@ -0,0 +1,49 @@
#ifndef ATOMIC_LOCK_H
#define ATOMIC_LOCK_H

#include <atomic>

class AFLock
{
public:
    AFLock() = default;
    ~AFLock() = default;

    void Lock()
    {
        while (mAFL.test_and_set(std::memory_order_acquire));
    }

    bool TryLock()
    {
        return mAFL.test_and_set(std::memory_order_acquire) ? false : true;
    }

    void Unlock()
    {
        mAFL.clear(std::memory_order_release);
    }

private:
    std::atomic_flag mAFL = ATOMIC_FLAG_INIT;
};

template<typename L>
class ScopedLock
{
public:
    explicit ScopedLock(L& lock) :
        mLock(lock)
    {
        mLock.Lock();
    }

    ~ScopedLock()
    {
        mLock.Unlock();
    }

private:
    L& mLock;
};
#endif
19
deploy/deploy_real/cpp_g1/CMakeLists.txt
Normal file
@@ -0,0 +1,19 @@
cmake_minimum_required(VERSION 3.16)
project(g1_deploy_real)

set(CMAKE_BUILD_TYPE Debug)

include_directories(/usr/local/include/ddscxx)
include_directories(/usr/local/include/iceoryx/v2.0.2)

set(CMAKE_CXX_STANDARD 17)

set(CMAKE_PREFIX_PATH "${CMAKE_CURRENT_LIST_DIR}/libtorch/share/cmake")
find_package(Torch REQUIRED)
include_directories(${TORCH_INCLUDE_DIRS})

aux_source_directory(. SRC_LIST)
add_executable(${PROJECT_NAME} ${SRC_LIST})

find_package(yaml-cpp REQUIRED)
target_link_libraries(${PROJECT_NAME} unitree_sdk2 ddsc ddscxx ${TORCH_LIBRARIES} yaml-cpp::yaml-cpp pthread)
250
deploy/deploy_real/cpp_g1/Controller.cpp
Normal file
@@ -0,0 +1,250 @@
#include "Controller.h"
#include <yaml-cpp/yaml.h>
#include <algorithm>
#include <thread>
#include "utilities.h"

#define TOPIC_LOWCMD "rt/lowcmd"
#define TOPIC_LOWSTATE "rt/lowstate"

Controller::Controller(const std::string &net_interface)
{
    YAML::Node yaml_node = YAML::LoadFile("../../configs/g1.yaml");

    leg_joint2motor_idx = yaml_node["leg_joint2motor_idx"].as<std::vector<float>>();
    kps = yaml_node["kps"].as<std::vector<float>>();
    kds = yaml_node["kds"].as<std::vector<float>>();
    default_angles = yaml_node["default_angles"].as<std::vector<float>>();
    arm_waist_joint2motor_idx = yaml_node["arm_waist_joint2motor_idx"].as<std::vector<float>>();
    arm_waist_kps = yaml_node["arm_waist_kps"].as<std::vector<float>>();
    arm_waist_kds = yaml_node["arm_waist_kds"].as<std::vector<float>>();
    arm_waist_target = yaml_node["arm_waist_target"].as<std::vector<float>>();
    ang_vel_scale = yaml_node["ang_vel_scale"].as<float>();
    dof_pos_scale = yaml_node["dof_pos_scale"].as<float>();
    dof_vel_scale = yaml_node["dof_vel_scale"].as<float>();
    action_scale = yaml_node["action_scale"].as<float>();
    cmd_scale = yaml_node["cmd_scale"].as<std::vector<float>>();
    num_actions = yaml_node["num_actions"].as<float>();
    num_obs = yaml_node["num_obs"].as<float>();
    max_cmd = yaml_node["max_cmd"].as<std::vector<float>>();

    obs.setZero(num_obs);
    act.setZero(num_actions);

    module = torch::jit::load("../../../pre_train/g1/motion.pt");

    unitree::robot::ChannelFactory::Instance()->Init(0, net_interface);

    lowcmd_publisher.reset(new unitree::robot::ChannelPublisher<unitree_hg::msg::dds_::LowCmd_>(TOPIC_LOWCMD));
    lowstate_subscriber.reset(new unitree::robot::ChannelSubscriber<unitree_hg::msg::dds_::LowState_>(TOPIC_LOWSTATE));

    lowcmd_publisher->InitChannel();
    lowstate_subscriber->InitChannel(std::bind(&Controller::low_state_message_handler, this, std::placeholders::_1));

    // wait for the first low_state message before spawning the command writer
    while (!mLowStateBuf.GetDataPtr())
    {
        usleep(100000);
    }

    low_cmd_write_thread_ptr = unitree::common::CreateRecurrentThreadEx("low_cmd_write", UT_CPU_ID_NONE, 2000, &Controller::low_cmd_write_handler, this);
    std::cout << "Controller init done!\n";
}

void Controller::zero_torque_state()
{
    const std::chrono::milliseconds cycle_time(20);
    auto next_cycle = std::chrono::steady_clock::now();

    std::cout << "zero_torque_state, press start\n";
    while (!joy.btn.components.start)
    {
        auto low_cmd = std::make_shared<unitree_hg::msg::dds_::LowCmd_>();

        for (auto &cmd : low_cmd->motor_cmd())
        {
            cmd.q() = 0;
            cmd.dq() = 0;
            cmd.kp() = 0;
            cmd.kd() = 0;
            cmd.tau() = 0;
        }

        mLowCmdBuf.SetDataPtr(low_cmd);

        next_cycle += cycle_time;
        std::this_thread::sleep_until(next_cycle);
    }
}

void Controller::move_to_default_pos()
{
    std::cout << "move_to_default_pos, press A\n";
    const std::chrono::milliseconds cycle_time(20);
    auto next_cycle = std::chrono::steady_clock::now();

    auto low_state = mLowStateBuf.GetDataPtr();
    std::array<float, 35> jpos;
    for (int i = 0; i < 35; i++)
    {
        jpos[i] = low_state->motor_state()[i].q();
    }

    int num_steps = 100;
    int count = 0;

    while (count <= num_steps || !joy.btn.components.A)
    {
        auto low_cmd = std::make_shared<unitree_hg::msg::dds_::LowCmd_>();
        float phase = std::clamp<float>(float(count++) / num_steps, 0, 1);

        // leg
        for (int i = 0; i < 12; i++)
        {
            low_cmd->motor_cmd()[i].q() = (1 - phase) * jpos[i] + phase * default_angles[i];
            low_cmd->motor_cmd()[i].kp() = kps[i];
            low_cmd->motor_cmd()[i].kd() = kds[i];
            low_cmd->motor_cmd()[i].tau() = 0.0;
            low_cmd->motor_cmd()[i].dq() = 0.0;
        }

        // waist arm
        for (int i = 12; i < 29; i++)
        {
            low_cmd->motor_cmd()[i].q() = (1 - phase) * jpos[i] + phase * arm_waist_target[i - 12];
            low_cmd->motor_cmd()[i].kp() = arm_waist_kps[i - 12];
            low_cmd->motor_cmd()[i].kd() = arm_waist_kds[i - 12];
            low_cmd->motor_cmd()[i].tau() = 0.0;
            low_cmd->motor_cmd()[i].dq() = 0.0;
        }

        mLowCmdBuf.SetDataPtr(low_cmd);

        next_cycle += cycle_time;
        std::this_thread::sleep_until(next_cycle);
    }
}

void Controller::run()
{
    std::cout << "run controller, press select\n";

    const std::chrono::milliseconds cycle_time(20);
    auto next_cycle = std::chrono::steady_clock::now();

    float period = .8;
    float time = 0;

    while (!joy.btn.components.select)
    {
        auto low_state = mLowStateBuf.GetDataPtr();
        // obs
        Eigen::Matrix3f R = Eigen::Quaternionf(low_state->imu_state().quaternion()[0], low_state->imu_state().quaternion()[1], low_state->imu_state().quaternion()[2], low_state->imu_state().quaternion()[3]).toRotationMatrix();

        for (int i = 0; i < 3; i++)
        {
            obs(i) = ang_vel_scale * low_state->imu_state().gyroscope()[i];
            obs(i + 3) = -R(2, i);
        }

        // bail out if the projected gravity says the robot has tipped over
        if (obs(5) > 0)
        {
            break;
        }

        obs(6) = joy.ly * max_cmd[0] * cmd_scale[0];
        obs(7) = joy.lx * -1 * max_cmd[1] * cmd_scale[1];
        obs(8) = joy.rx * -1 * max_cmd[2] * cmd_scale[2];

        for (int i = 0; i < 12; i++)
        {
            obs(9 + i) = (low_state->motor_state()[i].q() - default_angles[i]) * dof_pos_scale;
            obs(21 + i) = low_state->motor_state()[i].dq() * dof_vel_scale;
        }
        obs.segment(33, 12) = act;

        float phase = std::fmod(time / period, 1);
        time += .02;
        obs(45) = std::sin(2 * M_PI * phase);
        obs(46) = std::cos(2 * M_PI * phase);

        // policy forward
        torch::Tensor torch_tensor = torch::from_blob(obs.data(), {1, obs.size()}, torch::kFloat).clone();
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch_tensor);
        torch::Tensor output_tensor = module.forward(inputs).toTensor();
        std::memcpy(act.data(), output_tensor.data_ptr<float>(), output_tensor.size(1) * sizeof(float));

        auto low_cmd = std::make_shared<unitree_hg::msg::dds_::LowCmd_>();
        // leg
        for (int i = 0; i < 12; i++)
        {
            low_cmd->motor_cmd()[i].q() = act(i) * action_scale + default_angles[i];
            low_cmd->motor_cmd()[i].kp() = kps[i];
            low_cmd->motor_cmd()[i].kd() = kds[i];
            low_cmd->motor_cmd()[i].dq() = 0;
            low_cmd->motor_cmd()[i].tau() = 0;
        }

        // waist arm
        for (int i = 12; i < 29; i++)
        {
            low_cmd->motor_cmd()[i].q() = arm_waist_target[i - 12];
            low_cmd->motor_cmd()[i].kp() = arm_waist_kps[i - 12];
            low_cmd->motor_cmd()[i].kd() = arm_waist_kds[i - 12];
            low_cmd->motor_cmd()[i].dq() = 0;
            low_cmd->motor_cmd()[i].tau() = 0;
        }

        mLowCmdBuf.SetDataPtr(low_cmd);

        next_cycle += cycle_time;
        std::this_thread::sleep_until(next_cycle);
    }
}

void Controller::damp()
{
    std::cout << "damping\n";
    const std::chrono::milliseconds cycle_time(20);
    auto next_cycle = std::chrono::steady_clock::now();

    while (true)
    {
        auto low_cmd = std::make_shared<unitree_hg::msg::dds_::LowCmd_>();
        for (auto &cmd : low_cmd->motor_cmd())
        {
            cmd.kp() = 0;
            cmd.kd() = 8;
            cmd.dq() = 0;
            cmd.tau() = 0;
        }

        mLowCmdBuf.SetDataPtr(low_cmd);

        next_cycle += cycle_time;
        std::this_thread::sleep_until(next_cycle);
    }
}

void Controller::low_state_message_handler(const void *message)
{
    unitree_hg::msg::dds_::LowState_* ptr = (unitree_hg::msg::dds_::LowState_*)message;
    mLowStateBuf.SetData(*ptr);
    std::memcpy(&joy, ptr->wireless_remote().data(), ptr->wireless_remote().size() * sizeof(uint8_t));
}

void Controller::low_cmd_write_handler()
{
    if (auto lowCmdPtr = mLowCmdBuf.GetDataPtr())
    {
        lowCmdPtr->mode_machine() = mLowStateBuf.GetDataPtr()->mode_machine();
        lowCmdPtr->mode_pr() = 0;
        for (auto &cmd : lowCmdPtr->motor_cmd())
        {
            cmd.mode() = 1;
        }
        lowCmdPtr->crc() = crc32_core((uint32_t*)(lowCmdPtr.get()), (sizeof(unitree_hg::msg::dds_::LowCmd_) >> 2) - 1);
        lowcmd_publisher->Write(*lowCmdPtr);
    }
}
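The 47-dimensional observation filled in `Controller::run()` has a fixed layout (indices taken from the code above: scaled angular velocity, projected gravity, joystick command, leg joint positions and velocities, previous action, gait clock). A numpy sketch of the same packing, with illustrative function and argument names:

```python
import numpy as np

def build_obs(ang_vel, gravity, cmd, dof_pos, dof_vel, last_act, phase):
    """Mirror the index layout of Controller::run()'s observation vector."""
    return np.concatenate([
        ang_vel,    # 0:3   scaled gyroscope reading
        gravity,    # 3:6   projected gravity, -R[2, :]
        cmd,        # 6:9   joystick velocity command
        dof_pos,    # 9:21  12 leg joint positions (offset and scaled)
        dof_vel,    # 21:33 12 leg joint velocities (scaled)
        last_act,   # 33:45 previous policy action
        [np.sin(2 * np.pi * phase), np.cos(2 * np.pi * phase)],  # 45:47 gait clock
    ])

obs = build_obs(np.zeros(3), np.zeros(3), np.zeros(3),
                np.zeros(12), np.zeros(12), np.zeros(12), phase=0.25)
```

Summing the segment sizes (3 + 3 + 3 + 12 + 12 + 12 + 2) reproduces the `num_obs: 47` value from `g1.yaml`.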
66
deploy/deploy_real/cpp_g1/Controller.h
Normal file
@@ -0,0 +1,66 @@
#ifndef CONTROLLER_H
#define CONTROLLER_H

#include <unitree/idl/hg/LowCmd_.hpp>
#include <unitree/idl/hg/LowState_.hpp>
#include <unitree/robot/channel/channel_subscriber.hpp>
#include <unitree/robot/channel/channel_publisher.hpp>
#include <unitree/common/time/time_tool.hpp>

#include "torch/script.h"

#include <eigen3/Eigen/Eigen>

#include "joystick.h"
#include "DataBuffer.h"
#include <string>

class Controller
{
public:
    Controller(const std::string &net_interface);
    void low_state_message_handler(const void *message);
    void move_to_default_pos();
    void run();
    void damp();
    void zero_torque_state();

private:
    void low_cmd_write_handler();

    unitree::common::ThreadPtr low_cmd_write_thread_ptr;

    DataBuffer<unitree_hg::msg::dds_::LowCmd_> mLowCmdBuf;
    DataBuffer<unitree_hg::msg::dds_::LowState_> mLowStateBuf;

    unitree::robot::ChannelPublisherPtr<unitree_hg::msg::dds_::LowCmd_> lowcmd_publisher;
    unitree::robot::ChannelSubscriberPtr<unitree_hg::msg::dds_::LowState_> lowstate_subscriber;

    // joystick
    xRockerBtnDataStruct joy;

    // yaml config
    std::vector<float> leg_joint2motor_idx;
    std::vector<float> kps;
    std::vector<float> kds;
    std::vector<float> default_angles;
    std::vector<float> arm_waist_joint2motor_idx;
    std::vector<float> arm_waist_kps;
    std::vector<float> arm_waist_kds;
    std::vector<float> arm_waist_target;
    float ang_vel_scale;
    float dof_pos_scale;
    float dof_vel_scale;
    float action_scale;
    std::vector<float> cmd_scale;
    float num_actions;
    float num_obs;
    std::vector<float> max_cmd;

    Eigen::VectorXf obs;
    Eigen::VectorXf act;

    torch::jit::script::Module module;
};

#endif
77
deploy/deploy_real/cpp_g1/DataBuffer.h
Normal file
@@ -0,0 +1,77 @@
#ifndef DATA_BUFFER_H
#define DATA_BUFFER_H

#include <memory>
#include "AtomicLock.h"

template<typename T>
class DataBuffer
{
public:
    explicit DataBuffer() = default;
    ~DataBuffer() = default;

    void SetDataPtr(const std::shared_ptr<T>& dataPtr)
    {
        ScopedLock<AFLock> lock(mLock);
        mDataPtr = dataPtr;
    }

    std::shared_ptr<T> GetDataPtr(bool clear = false)
    {
        ScopedLock<AFLock> lock(mLock);
        if (clear)
        {
            std::shared_ptr<T> dataPtr = mDataPtr;
            mDataPtr.reset();
            return dataPtr;
        }
        else
        {
            return mDataPtr;
        }
    }

    void SwapDataPtr(std::shared_ptr<T>& dataPtr)
    {
        ScopedLock<AFLock> lock(mLock);
        mDataPtr.swap(dataPtr);
    }

    void SetData(const T& data)
    {
        ScopedLock<AFLock> lock(mLock);
        mDataPtr = std::shared_ptr<T>(new T(data));
    }

    /*
     * Not recommended: this makes an additional copy of the data.
     */
    bool GetData(T& data, bool clear = false)
    {
        ScopedLock<AFLock> lock(mLock);
        if (mDataPtr == NULL)
        {
            return false;
        }

        data = *mDataPtr;
        if (clear)
        {
            mDataPtr.reset();
        }

        return true;
    }

    void Clear()
    {
        ScopedLock<AFLock> lock(mLock);
        mDataPtr.reset();
    }

private:
    std::shared_ptr<T> mDataPtr;
    AFLock mLock;
};
#endif
41
deploy/deploy_real/cpp_g1/joystick.h
Normal file
@@ -0,0 +1,41 @@
#ifndef JOYSTICK_H
#define JOYSTICK_H

#include <cstdint>

typedef union
{
    struct
    {
        uint8_t R1 : 1;
        uint8_t L1 : 1;
        uint8_t start : 1;
        uint8_t select : 1;
        uint8_t R2 : 1;
        uint8_t L2 : 1;
        uint8_t F1 : 1;
        uint8_t F2 : 1;
        uint8_t A : 1;
        uint8_t B : 1;
        uint8_t X : 1;
        uint8_t Y : 1;
        uint8_t up : 1;
        uint8_t right : 1;
        uint8_t down : 1;
        uint8_t left : 1;
    } components;
    uint16_t value;
} xKeySwitchUnion;

typedef struct
{
    uint8_t head[2];
    xKeySwitchUnion btn;
    float lx;
    float rx;
    float ry;
    float L2;
    float ly;

    uint8_t idle[16];
} xRockerBtnDataStruct;

#endif
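Assuming no struct padding (the struct is copied raw out of the `wireless_remote` bytes), the field offsets implied by this layout line up with the byte offsets the Python `RemoteController` reads: `head` (2 bytes) and the `btn` bitfield (2 bytes) put `lx` at offset 4, with `rx`, `ry`, `L2`, and `ly` following every 4 bytes. A small check with the `struct` module:

```python
import struct

# head, 16-bit button bitfield, lx/rx/ry/L2/ly floats, 16 idle bytes
fmt = "<2B H 5f 16B"
size = struct.calcsize(fmt)

# Offsets used by RemoteController.set (offset 16, the L2 trigger, is skipped there)
offsets = {"lx": 4, "rx": 8, "ry": 12, "L2": 16, "ly": 20}
```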
17
deploy/deploy_real/cpp_g1/main.cpp
Normal file
@@ -0,0 +1,17 @@
#include <iostream>
#include "Controller.h"

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        std::cout << "Usage: g1_deploy_real [net_interface]" << std::endl;
        exit(1);
    }
    Controller controller(argv[1]);
    controller.zero_torque_state();
    controller.move_to_default_pos();
    controller.run();
    controller.damp();
    return 0;
}
32
deploy/deploy_real/cpp_g1/utilities.cpp
Normal file
@@ -0,0 +1,32 @@
|
||||
#include "utilities.h"
|
||||
|
||||
unsigned int crc32_core(unsigned int* ptr, unsigned int len)
|
||||
{
|
||||
unsigned int xbit = 0;
|
||||
unsigned int data = 0;
|
||||
unsigned int CRC32 = 0xFFFFFFFF;
|
||||
const unsigned int dwPolynomial = 0x04c11db7;
|
||||
for (unsigned int i = 0; i < len; i++)
|
||||
{
|
||||
xbit = 1 << 31;
|
||||
data = ptr[i];
|
||||
for (unsigned int bits = 0; bits < 32; bits++)
|
||||
{
|
||||
if (CRC32 & 0x80000000)
|
||||
{
|
||||
CRC32 <<= 1;
|
||||
CRC32 ^= dwPolynomial;
|
||||
}
|
||||
else
|
||||
{
|
||||
CRC32 <<= 1;
|
||||
}
|
||||
if (data & xbit)
|
||||
{
|
||||
CRC32 ^= dwPolynomial;
|
||||
}
|
||||
xbit >>= 1;
|
||||
}
|
||||
}
|
||||
return CRC32;
|
||||
}
|
||||
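The function above is a bitwise CRC-32 over 32-bit words (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no reflection or final XOR). A direct Python transliteration can be handy for cross-checking packets on the host side; this is a sketch mirroring the C code, not an SDK API:

```python
def crc32_core(words):
    """Mirror of the C crc32_core above; `words` is a sequence of 32-bit ints."""
    POLY = 0x04C11DB7
    crc = 0xFFFFFFFF
    for data in words:
        xbit = 1 << 31
        for _ in range(32):
            # shift, folding in the polynomial when the high bit falls off
            if crc & 0x80000000:
                crc = ((crc << 1) & 0xFFFFFFFF) ^ POLY
            else:
                crc = (crc << 1) & 0xFFFFFFFF
            # mix in the current message bit, MSB first
            if data & xbit:
                crc ^= POLY
            xbit >>= 1
    return crc
```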
6
deploy/deploy_real/cpp_g1/utilities.h
Normal file
@@ -0,0 +1,6 @@
#ifndef UTILITIES_H
#define UTILITIES_H

unsigned int crc32_core(unsigned int* ptr, unsigned int len);

#endif
265
deploy/deploy_real/deploy_real.py
Normal file
@@ -0,0 +1,265 @@
from legged_gym import LEGGED_GYM_ROOT_DIR
from typing import Union
import numpy as np
import time
import torch

from unitree_sdk2py.core.channel import ChannelPublisher, ChannelFactoryInitialize
from unitree_sdk2py.core.channel import ChannelSubscriber, ChannelFactoryInitialize
from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_, unitree_hg_msg_dds__LowState_
from unitree_sdk2py.idl.default import unitree_go_msg_dds__LowCmd_, unitree_go_msg_dds__LowState_
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowCmd_ as LowCmdHG
from unitree_sdk2py.idl.unitree_go.msg.dds_ import LowCmd_ as LowCmdGo
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowState_ as LowStateHG
from unitree_sdk2py.idl.unitree_go.msg.dds_ import LowState_ as LowStateGo
from unitree_sdk2py.utils.crc import CRC

from common.command_helper import create_damping_cmd, create_zero_cmd, init_cmd_hg, init_cmd_go, MotorMode
from common.rotation_helper import get_gravity_orientation, transform_imu_data
from common.remote_controller import RemoteController, KeyMap
from config import Config


class Controller:
    def __init__(self, config: Config) -> None:
        self.config = config
        self.remote_controller = RemoteController()

        # Initialize the policy network
        self.policy = torch.jit.load(config.policy_path)
        # Initialize process variables
        self.qj = np.zeros(config.num_actions, dtype=np.float32)
        self.dqj = np.zeros(config.num_actions, dtype=np.float32)
        self.action = np.zeros(config.num_actions, dtype=np.float32)
        self.target_dof_pos = config.default_angles.copy()
        self.obs = np.zeros(config.num_obs, dtype=np.float32)
        self.cmd = np.array([0.0, 0, 0])
        self.counter = 0

        if config.msg_type == "hg":
            # g1 and h1_2 use the hg msg type
            self.low_cmd = unitree_hg_msg_dds__LowCmd_()
            self.low_state = unitree_hg_msg_dds__LowState_()
            self.mode_pr_ = MotorMode.PR
            self.mode_machine_ = 0

            self.lowcmd_publisher_ = ChannelPublisher(config.lowcmd_topic, LowCmdHG)
            self.lowcmd_publisher_.Init()

            self.lowstate_subscriber = ChannelSubscriber(config.lowstate_topic, LowStateHG)
            self.lowstate_subscriber.Init(self.LowStateHgHandler, 10)

        elif config.msg_type == "go":
            # h1 uses the go msg type
            self.low_cmd = unitree_go_msg_dds__LowCmd_()
            self.low_state = unitree_go_msg_dds__LowState_()

            self.lowcmd_publisher_ = ChannelPublisher(config.lowcmd_topic, LowCmdGo)
            self.lowcmd_publisher_.Init()

            self.lowstate_subscriber = ChannelSubscriber(config.lowstate_topic, LowStateGo)
            self.lowstate_subscriber.Init(self.LowStateGoHandler, 10)

        else:
            raise ValueError("Invalid msg_type")

        # wait for the subscriber to receive data
        self.wait_for_low_state()

        # Initialize the command msg
        if config.msg_type == "hg":
            init_cmd_hg(self.low_cmd, self.mode_machine_, self.mode_pr_)
        elif config.msg_type == "go":
            init_cmd_go(self.low_cmd, weak_motor=self.config.weak_motor)

    def LowStateHgHandler(self, msg: LowStateHG):
        self.low_state = msg
        self.mode_machine_ = self.low_state.mode_machine
        self.remote_controller.set(self.low_state.wireless_remote)

    def LowStateGoHandler(self, msg: LowStateGo):
        self.low_state = msg
        self.remote_controller.set(self.low_state.wireless_remote)

    def send_cmd(self, cmd: Union[LowCmdGo, LowCmdHG]):
        cmd.crc = CRC().Crc(cmd)
        self.lowcmd_publisher_.Write(cmd)

    def wait_for_low_state(self):
        while self.low_state.tick == 0:
            time.sleep(self.config.control_dt)
        print("Successfully connected to the robot.")

    def zero_torque_state(self):
        print("Enter zero torque state.")
        print("Waiting for the start signal...")
        while self.remote_controller.button[KeyMap.start] != 1:
            create_zero_cmd(self.low_cmd)
            self.send_cmd(self.low_cmd)
            time.sleep(self.config.control_dt)

    def move_to_default_pos(self):
        print("Moving to default pos.")
        # move time 2s
        total_time = 2
        num_step = int(total_time / self.config.control_dt)

        dof_idx = self.config.leg_joint2motor_idx + self.config.arm_waist_joint2motor_idx
        kps = self.config.kps + self.config.arm_waist_kps
        kds = self.config.kds + self.config.arm_waist_kds
        default_pos = np.concatenate((self.config.default_angles, self.config.arm_waist_target), axis=0)
        dof_size = len(dof_idx)

        # record the current pos
        init_dof_pos = np.zeros(dof_size, dtype=np.float32)
        for i in range(dof_size):
            init_dof_pos[i] = self.low_state.motor_state[dof_idx[i]].q

        # move to default pos
        for i in range(num_step):
            alpha = i / num_step
            for j in range(dof_size):
                motor_idx = dof_idx[j]
                target_pos = default_pos[j]
                self.low_cmd.motor_cmd[motor_idx].q = init_dof_pos[j] * (1 - alpha) + target_pos * alpha
                self.low_cmd.motor_cmd[motor_idx].qd = 0
                self.low_cmd.motor_cmd[motor_idx].kp = kps[j]
                self.low_cmd.motor_cmd[motor_idx].kd = kds[j]
                self.low_cmd.motor_cmd[motor_idx].tau = 0
            self.send_cmd(self.low_cmd)
            time.sleep(self.config.control_dt)

    def default_pos_state(self):
        print("Enter default pos state.")
        print("Waiting for the Button A signal...")
        while self.remote_controller.button[KeyMap.A] != 1:
            for i in range(len(self.config.leg_joint2motor_idx)):
                motor_idx = self.config.leg_joint2motor_idx[i]
                self.low_cmd.motor_cmd[motor_idx].q = self.config.default_angles[i]
                self.low_cmd.motor_cmd[motor_idx].qd = 0
                self.low_cmd.motor_cmd[motor_idx].kp = self.config.kps[i]
                self.low_cmd.motor_cmd[motor_idx].kd = self.config.kds[i]
                self.low_cmd.motor_cmd[motor_idx].tau = 0
            for i in range(len(self.config.arm_waist_joint2motor_idx)):
                motor_idx = self.config.arm_waist_joint2motor_idx[i]
                self.low_cmd.motor_cmd[motor_idx].q = self.config.arm_waist_target[i]
                self.low_cmd.motor_cmd[motor_idx].qd = 0
                self.low_cmd.motor_cmd[motor_idx].kp = self.config.arm_waist_kps[i]
                self.low_cmd.motor_cmd[motor_idx].kd = self.config.arm_waist_kds[i]
                self.low_cmd.motor_cmd[motor_idx].tau = 0
            self.send_cmd(self.low_cmd)
            time.sleep(self.config.control_dt)

    def run(self):
        self.counter += 1
        # Get the current joint position and velocity
        for i in range(len(self.config.leg_joint2motor_idx)):
            self.qj[i] = self.low_state.motor_state[self.config.leg_joint2motor_idx[i]].q
            self.dqj[i] = self.low_state.motor_state[self.config.leg_joint2motor_idx[i]].dq

        # imu_state quaternion: w, x, y, z
        quat = self.low_state.imu_state.quaternion
        ang_vel = np.array([self.low_state.imu_state.gyroscope], dtype=np.float32)

        if self.config.imu_type == "torso":
            # h1 and h1_2 imu is on the torso
            # imu data needs to be transformed to the pelvis frame
            waist_yaw = self.low_state.motor_state[self.config.arm_waist_joint2motor_idx[0]].q
            waist_yaw_omega = self.low_state.motor_state[self.config.arm_waist_joint2motor_idx[0]].dq
            quat, ang_vel = transform_imu_data(waist_yaw=waist_yaw, waist_yaw_omega=waist_yaw_omega, imu_quat=quat, imu_omega=ang_vel)

        # create observation
        gravity_orientation = get_gravity_orientation(quat)
        qj_obs = self.qj.copy()
        dqj_obs = self.dqj.copy()
        qj_obs = (qj_obs - self.config.default_angles) * self.config.dof_pos_scale
        dqj_obs = dqj_obs * self.config.dof_vel_scale
        ang_vel = ang_vel * self.config.ang_vel_scale
        period = 0.8
        count = self.counter * self.config.control_dt
        phase = count % period / period
        sin_phase = np.sin(2 * np.pi * phase)
        cos_phase = np.cos(2 * np.pi * phase)

        self.cmd[0] = self.remote_controller.ly
        self.cmd[1] = self.remote_controller.lx * -1
        self.cmd[2] = self.remote_controller.rx * -1

        num_actions = self.config.num_actions
        self.obs[:3] = ang_vel
        self.obs[3:6] = gravity_orientation
        self.obs[6:9] = self.cmd * self.config.cmd_scale * self.config.max_cmd
        self.obs[9 : 9 + num_actions] = qj_obs
        self.obs[9 + num_actions : 9 + num_actions * 2] = dqj_obs
        self.obs[9 + num_actions * 2 : 9 + num_actions * 3] = self.action
        self.obs[9 + num_actions * 3] = sin_phase
        self.obs[9 + num_actions * 3 + 1] = cos_phase

        # Get the action from the policy network
        obs_tensor = torch.from_numpy(self.obs).unsqueeze(0)
        self.action = self.policy(obs_tensor).detach().numpy().squeeze()

        # transform action to target_dof_pos
        target_dof_pos = self.config.default_angles + self.action * self.config.action_scale

        # Build low cmd
        for i in range(len(self.config.leg_joint2motor_idx)):
            motor_idx = self.config.leg_joint2motor_idx[i]
            self.low_cmd.motor_cmd[motor_idx].q = target_dof_pos[i]
            self.low_cmd.motor_cmd[motor_idx].qd = 0
            self.low_cmd.motor_cmd[motor_idx].kp = self.config.kps[i]
            self.low_cmd.motor_cmd[motor_idx].kd = self.config.kds[i]
            self.low_cmd.motor_cmd[motor_idx].tau = 0

        for i in range(len(self.config.arm_waist_joint2motor_idx)):
            motor_idx = self.config.arm_waist_joint2motor_idx[i]
            self.low_cmd.motor_cmd[motor_idx].q = self.config.arm_waist_target[i]
            self.low_cmd.motor_cmd[motor_idx].qd = 0
            self.low_cmd.motor_cmd[motor_idx].kp = self.config.arm_waist_kps[i]
            self.low_cmd.motor_cmd[motor_idx].kd = self.config.arm_waist_kds[i]
            self.low_cmd.motor_cmd[motor_idx].tau = 0

        # send the command
        self.send_cmd(self.low_cmd)

        time.sleep(self.config.control_dt)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("net", type=str, help="network interface")
    parser.add_argument("config", type=str, nargs="?", default="g1.yaml", help="config file name in the configs folder")
    args = parser.parse_args()

    # Load config
    config_path = f"{LEGGED_GYM_ROOT_DIR}/deploy/deploy_real/configs/{args.config}"
    config = Config(config_path)

    # Initialize DDS communication
    ChannelFactoryInitialize(0, args.net)

    controller = Controller(config)

    # Enter the zero torque state, press the start key to continue executing
    controller.zero_torque_state()

    # Move to the default position
    controller.move_to_default_pos()

    # Enter the default position state, press the A key to continue executing
    controller.default_pos_state()

    while True:
        try:
            controller.run()
            # Press the select key to exit
            if controller.remote_controller.button[KeyMap.select] == 1:
                break
        except KeyboardInterrupt:
            break
    # Enter the damping state
    create_damping_cmd(controller.low_cmd)
    controller.send_cmd(controller.low_cmd)
    print("Exit")
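The `run()` loop above packs the policy input in a fixed order: angular velocity (3), gravity direction (3), command (3), scaled joint positions, scaled joint velocities, previous actions, and a sin/cos gait clock. A minimal sketch of that packing (the 0.8 s period and the layout come from the code above; `build_obs` and its arguments are illustrative names):

```python
import numpy as np

def build_obs(ang_vel, gravity, cmd, qj, dqj, last_action, counter,
              control_dt=0.02, period=0.8):
    """Pack the observation vector in the same order as Controller.run()."""
    n = len(qj)
    obs = np.zeros(9 + 3 * n + 2, dtype=np.float32)
    obs[:3] = ang_vel                     # base angular velocity
    obs[3:6] = gravity                    # projected gravity direction
    obs[6:9] = cmd                        # velocity command from the remote
    obs[9:9 + n] = qj                     # joint positions (offset + scaled)
    obs[9 + n:9 + 2 * n] = dqj            # joint velocities (scaled)
    obs[9 + 2 * n:9 + 3 * n] = last_action
    phase = (counter * control_dt) % period / period  # gait clock in [0, 1)
    obs[9 + 3 * n] = np.sin(2 * np.pi * phase)
    obs[9 + 3 * n + 1] = np.cos(2 * np.pi * phase)
    return obs
```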
BIN
deploy/pre_train/g1/motion.pt
Normal file
Binary file not shown.
BIN
deploy/pre_train/h1/motion.pt
Normal file
Binary file not shown.
BIN
deploy/pre_train/h1_2/motion.pt
Normal file
Binary file not shown.
BIN
deploy/pre_train/smallgray/policy_lstm_1.pt
Normal file
Binary file not shown.
159
doc/setup_en.md
Normal file
@@ -0,0 +1,159 @@
# Installation Guide

## System Requirements

- **Operating System**: Recommended Ubuntu 18.04 or later
- **GPU**: Nvidia GPU
- **Driver Version**: Recommended version 525 or later

---

## 1. Creating a Virtual Environment

It is recommended to run training or deployment programs in a virtual environment. Conda is recommended for creating virtual environments. If Conda is already installed on your system, you can skip step 1.1.

### 1.1 Download and Install MiniConda

MiniConda is a lightweight distribution of Conda, suitable for creating and managing virtual environments. Use the following commands to download and install:

```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
```

After installation, initialize Conda:

```bash
~/miniconda3/bin/conda init --all
source ~/.bashrc
```

### 1.2 Create a New Environment

Use the following command to create a virtual environment:

```bash
conda create -n unitree-rl python=3.8
```

### 1.3 Activate the Virtual Environment

```bash
conda activate unitree-rl
```

---

## 2. Installing Dependencies

### 2.1 Install PyTorch

PyTorch is a neural network computation framework used for model training and inference. Install it using the following command:

```bash
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
```

### 2.2 Install Isaac Gym

Isaac Gym is a rigid body simulation and training framework provided by Nvidia.

#### 2.2.1 Download

Download [Isaac Gym](https://developer.nvidia.com/isaac-gym) from Nvidia’s official website.

#### 2.2.2 Install

After extracting the package, navigate to the `isaacgym/python` folder and install it using the following commands:

```bash
cd isaacgym/python
pip install -e .
```

#### 2.2.3 Verify Installation

Run the following command. If a window opens displaying 1080 balls falling, the installation was successful:

```bash
cd examples
python 1080_balls_of_solitude.py
```

If you encounter any issues, refer to the official documentation at `isaacgym/docs/index.html`.

### 2.3 Install rsl_rl

`rsl_rl` is a library implementing reinforcement learning algorithms.

#### 2.3.1 Download

Clone the repository using Git:

```bash
git clone https://github.com/leggedrobotics/rsl_rl.git
```

#### 2.3.2 Switch Branch

Switch to the v1.0.2 branch:

```bash
cd rsl_rl
git checkout v1.0.2
```

#### 2.3.3 Install

```bash
pip install -e .
```

### 2.4 Install unitree_rl_gym

#### 2.4.1 Download

Clone the repository using Git:

```bash
git clone https://github.com/unitreerobotics/unitree_rl_gym.git
```

#### 2.4.2 Install

Navigate to the directory and install it:

```bash
cd unitree_rl_gym
pip install -e .
```

### 2.5 Install unitree_sdk2py (Optional)

`unitree_sdk2py` is a library used for communication with real robots. If you need to deploy the trained model on a physical robot, install this library.

#### 2.5.1 Download

Clone the repository using Git:

```bash
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
```

#### 2.5.2 Install

Navigate to the directory and install it:

```bash
cd unitree_sdk2_python
pip install -e .
```

---

## Summary

After completing the above steps, you are ready to run the related programs in the virtual environment. If you encounter any issues, refer to the official documentation of each component or check if the dependencies are installed correctly.
159
doc/setup_zh.md
Normal file
@@ -0,0 +1,159 @@
# 安装配置文档

## 系统要求

- **操作系统**:推荐使用 Ubuntu 18.04 或更高版本
- **显卡**:Nvidia 显卡
- **驱动版本**:建议使用 525 或更高版本

---

## 1. 创建虚拟环境

建议在虚拟环境中运行训练或部署程序,推荐使用 Conda 创建虚拟环境。如果您的系统中已经安装了 Conda,可以跳过步骤 1.1。

### 1.1 下载并安装 MiniConda

MiniConda 是 Conda 的轻量级发行版,适用于创建和管理虚拟环境。使用以下命令下载并安装:

```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
```

安装完成后,初始化 Conda:

```bash
~/miniconda3/bin/conda init --all
source ~/.bashrc
```

### 1.2 创建新环境

使用以下命令创建虚拟环境:

```bash
conda create -n unitree-rl python=3.8
```

### 1.3 激活虚拟环境

```bash
conda activate unitree-rl
```

---

## 2. 安装依赖

### 2.1 安装 PyTorch

PyTorch 是一个神经网络计算框架,用于模型训练和推理。使用以下命令安装:

```bash
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
```

### 2.2 安装 Isaac Gym

Isaac Gym 是 Nvidia 提供的刚体仿真和训练框架。

#### 2.2.1 下载

从 Nvidia 官网下载 [Isaac Gym](https://developer.nvidia.com/isaac-gym)。

#### 2.2.2 安装

解压后进入 `isaacgym/python` 文件夹,执行以下命令安装:

```bash
cd isaacgym/python
pip install -e .
```

#### 2.2.3 验证安装

运行以下命令,若弹出窗口并显示 1080 个球下落,则安装成功:

```bash
cd examples
python 1080_balls_of_solitude.py
```

如有问题,可参考 `isaacgym/docs/index.html` 中的官方文档。

### 2.3 安装 rsl_rl

`rsl_rl` 是一个强化学习算法库。

#### 2.3.1 下载

通过 Git 克隆仓库:

```bash
git clone https://github.com/leggedrobotics/rsl_rl.git
```

#### 2.3.2 切换分支

切换到 v1.0.2 分支:

```bash
cd rsl_rl
git checkout v1.0.2
```

#### 2.3.3 安装

```bash
pip install -e .
```

### 2.4 安装 unitree_rl_gym

#### 2.4.1 下载

通过 Git 克隆仓库:

```bash
git clone https://github.com/unitreerobotics/unitree_rl_gym.git
```

#### 2.4.2 安装

进入目录并安装:

```bash
cd unitree_rl_gym
pip install -e .
```

### 2.5 安装 unitree_sdk2py(可选)

`unitree_sdk2py` 是用于与真实机器人通信的库。如果需要将训练的模型部署到物理机器人上运行,可以安装此库。

#### 2.5.1 下载

通过 Git 克隆仓库:

```bash
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
```

#### 2.5.2 安装

进入目录并安装:

```bash
cd unitree_sdk2_python
pip install -e .
```

---

## 总结

按照上述步骤完成后,您已经准备好在虚拟环境中运行相关程序。若遇到问题,请参考各组件的官方文档或检查依赖安装是否正确。
31
legged_gym/LICENSE
Normal file
@@ -0,0 +1,31 @@
Copyright (c) 2021, ETH Zurich, Nikita Rudin
Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

See licenses/assets for license information for assets included in this repository.
See licenses/dependencies for license information of dependencies of this package.
4
legged_gym/__init__.py
Normal file
@@ -0,0 +1,4 @@
import os

LEGGED_GYM_ROOT_DIR = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
LEGGED_GYM_ENVS_DIR = os.path.join(LEGGED_GYM_ROOT_DIR, 'legged_gym', 'envs')
12
legged_gym/envs/__init__.py
Normal file
@@ -0,0 +1,12 @@
from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR

from legged_gym.envs.fh.fh_config import FHRoughCfg, FHRoughCfgPPO
from legged_gym.envs.fh.fh_env import FH_Env

from legged_gym.envs.proto.pro_config import PRORoughCfg, PRORoughCfgPPO
from legged_gym.envs.proto.pro_env import PRO_Env

from legged_gym.utils.task_registry import task_registry

task_registry.register("fh", FH_Env, FHRoughCfg(), FHRoughCfgPPO())
task_registry.register("proto", PRO_Env, PRORoughCfg(), PRORoughCfgPPO())
25
legged_gym/envs/base/base_config.py
Normal file
@@ -0,0 +1,25 @@
import inspect

class BaseConfig:
    def __init__(self) -> None:
        """Initializes all member classes recursively. Ignores all names starting with '__' (built-in methods)."""
        self.init_member_classes(self)

    @staticmethod
    def init_member_classes(obj):
        # iterate over all attribute names
        for key in dir(obj):
            # disregard builtin attributes
            # if key.startswith("__"):
            if key == "__class__":
                continue
            # get the corresponding attribute object
            var = getattr(obj, key)
            # check if the attribute is a class
            if inspect.isclass(var):
                # instantiate the class
                i_var = var()
                # set the attribute to the instance instead of the type
                setattr(obj, key, i_var)
                # recursively init members of the attribute
                BaseConfig.init_member_classes(i_var)
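The recursive instantiation above is what lets configs be written as nested classes but read as nested instances. A small self-contained sketch of the pattern (the `ExampleCfg` names are hypothetical, not from the repo):

```python
import inspect

class BaseConfig:
    """Recursively replace nested class attributes with instances."""
    def __init__(self) -> None:
        self.init_member_classes(self)

    @staticmethod
    def init_member_classes(obj):
        for key in dir(obj):
            if key == "__class__":
                continue
            var = getattr(obj, key)
            if inspect.isclass(var):
                i_var = var()              # instantiate the nested class
                setattr(obj, key, i_var)   # shadow the class with the instance
                BaseConfig.init_member_classes(i_var)

class ExampleCfg(BaseConfig):
    class env:                  # nested class, becomes an instance on __init__
        num_envs = 4096
        class obs:              # nesting can go arbitrarily deep
            scale = 0.25
```

After `cfg = ExampleCfg()`, `cfg.env` and `cfg.env.obs` are instances, so per-object overrides do not leak between config objects.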
115
legged_gym/envs/base/base_task.py
Normal file
@@ -0,0 +1,115 @@

import sys
from isaacgym import gymapi
from isaacgym import gymutil
import numpy as np
import torch

# Base class for RL tasks
class BaseTask:

    def __init__(self, cfg, sim_params, physics_engine, sim_device, headless):
        self.gym = gymapi.acquire_gym()

        self.sim_params = sim_params
        self.physics_engine = physics_engine
        self.sim_device = sim_device
        sim_device_type, self.sim_device_id = gymutil.parse_device_str(self.sim_device)
        self.headless = headless

        # env device is GPU only if sim is on GPU and use_gpu_pipeline=True, otherwise returned tensors are copied to CPU by physX.
        if sim_device_type == 'cuda' and sim_params.use_gpu_pipeline:
            self.device = self.sim_device
        else:
            self.device = 'cpu'

        # graphics device for rendering, -1 for no rendering
        self.graphics_device_id = self.sim_device_id
        if self.headless:
            self.graphics_device_id = -1

        self.num_envs = cfg.env.num_envs
        self.num_obs = cfg.env.num_observations
        self.num_privileged_obs = cfg.env.num_privileged_obs
        self.num_actions = cfg.env.num_actions

        # optimization flags for pytorch JIT
        torch._C._jit_set_profiling_mode(False)
        torch._C._jit_set_profiling_executor(False)

        # allocate buffers
        self.obs_buf = torch.zeros(self.num_envs, self.num_obs, device=self.device, dtype=torch.float)
        self.rew_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.float)
        self.reset_buf = torch.ones(self.num_envs, device=self.device, dtype=torch.long)
        self.episode_length_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.long)
        self.time_out_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.bool)
        if self.num_privileged_obs is not None:
            self.privileged_obs_buf = torch.zeros(self.num_envs, self.num_privileged_obs, device=self.device, dtype=torch.float)
        else:
            self.privileged_obs_buf = None
            # self.num_privileged_obs = self.num_obs

        self.extras = {}

        # create envs, sim and viewer
        self.create_sim()
        self.gym.prepare_sim(self.sim)

        # todo: read from config
        self.enable_viewer_sync = True
        self.viewer = None

        # if running with a viewer, set up keyboard shortcuts and camera
        if not self.headless:
            # subscribe to keyboard shortcuts
            self.viewer = self.gym.create_viewer(
                self.sim, gymapi.CameraProperties())
            self.gym.subscribe_viewer_keyboard_event(
                self.viewer, gymapi.KEY_ESCAPE, "QUIT")
            self.gym.subscribe_viewer_keyboard_event(
                self.viewer, gymapi.KEY_V, "toggle_viewer_sync")

    def get_observations(self):
        return self.obs_buf

    def get_privileged_observations(self):
        return self.privileged_obs_buf

    def reset_idx(self, env_ids):
        """Reset selected robots"""
        raise NotImplementedError

    def reset(self):
        """Reset all robots"""
        self.reset_idx(torch.arange(self.num_envs, device=self.device))
        obs, privileged_obs, _, _, _ = self.step(torch.zeros(self.num_envs, self.num_actions, device=self.device, requires_grad=False))
        return obs, privileged_obs

    def step(self, actions):
        raise NotImplementedError

    def render(self, sync_frame_time=True):
        if self.viewer:
            # check for window closed
            if self.gym.query_viewer_has_closed(self.viewer):
                sys.exit()

            # check for keyboard events
            for evt in self.gym.query_viewer_action_events(self.viewer):
                if evt.action == "QUIT" and evt.value > 0:
                    sys.exit()
                elif evt.action == "toggle_viewer_sync" and evt.value > 0:
                    self.enable_viewer_sync = not self.enable_viewer_sync

            # fetch results
            if self.device != 'cpu':
                self.gym.fetch_results(self.sim, True)

            # step graphics
            if self.enable_viewer_sync:
                self.gym.step_graphics(self.sim)
                self.gym.draw_viewer(self.viewer, self.sim, True)
                if sync_frame_time:
                    self.gym.sync_frame_time(self.sim)
            else:
                self.gym.poll_viewer_events(self.viewer)
893
legged_gym/envs/base/legged_robot.py
Normal file
893
legged_gym/envs/base/legged_robot.py
Normal file
@@ -0,0 +1,893 @@
|
||||
from legged_gym import LEGGED_GYM_ROOT_DIR, envs
|
||||
import time
|
||||
from warnings import WarningMessage
|
||||
import numpy as np
|
||||
import os
|
||||
|
||||
from isaacgym.torch_utils import *
|
||||
from isaacgym import gymtorch, gymapi, gymutil
|
||||
|
||||
import torch
|
||||
from torch import Tensor
|
||||
from typing import Tuple, Dict
|
||||
|
||||
from legged_gym import LEGGED_GYM_ROOT_DIR
|
||||
from legged_gym.envs.base.base_task import BaseTask
|
||||
from legged_gym.utils.math import quat_apply_yaw, wrap_to_pi, torch_rand_sqrt_float
|
||||
from legged_gym.utils.isaacgym_utils import get_euler_xyz as get_euler_xyz_in_tensor
|
||||
from legged_gym.utils.helpers import class_to_dict
|
||||
from .legged_robot_config import LeggedRobotCfg
|
||||
|
||||
class LeggedRobot(BaseTask):
|
||||
def __init__(self, cfg: LeggedRobotCfg, sim_params, physics_engine, sim_device, headless):
|
||||
""" Parses the provided config file,
|
||||
calls create_sim() (which creates, simulation and environments),
|
||||
initilizes pytorch buffers used during training
|
||||
|
||||
Args:
|
||||
cfg (Dict): Environment config file
|
||||
sim_params (gymapi.SimParams): simulation parameters
|
||||
physics_engine (gymapi.SimType): gymapi.SIM_PHYSX (must be PhysX)
|
||||
device_type (string): 'cuda' or 'cpu'
|
||||
device_id (int): 0, 1, ...
|
||||
headless (bool): Run without rendering if True
|
||||
"""
|
||||
self.cfg = cfg
|
||||
self.sim_params = sim_params
|
||||
self.height_samples = None
|
||||
self.debug_viz = False
|
||||
self.init_done = False
|
||||
self._parse_cfg(self.cfg)
|
||||
super().__init__(self.cfg, sim_params, physics_engine, sim_device, headless)
|
||||
|
||||
if not self.headless:
|
||||
self.set_camera(self.cfg.viewer.pos, self.cfg.viewer.lookat)
|
||||
self._init_buffers()
|
||||
self._prepare_reward_function()
|
||||
self.init_done = True
|
||||
|
||||
    def step(self, actions):
        """ Apply actions, simulate, call self.post_physics_step()

        Args:
            actions (torch.Tensor): Tensor of shape (num_envs, num_actions_per_env)
        """
        clip_actions = self.cfg.normalization.clip_actions
        self.actions = torch.clip(actions, -clip_actions, clip_actions).to(self.device)
        # step physics and render each frame
        self.render()
        for _ in range(self.cfg.control.decimation):
            self.torques = self._compute_torques(self.actions).view(self.torques.shape)
            self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(self.torques))
            self.gym.simulate(self.sim)
            if self.cfg.env.test:
                elapsed_time = self.gym.get_elapsed_time(self.sim)
                sim_time = self.gym.get_sim_time(self.sim)
                if sim_time - elapsed_time > 0:
                    time.sleep(sim_time - elapsed_time)

            if self.device == 'cpu':
                self.gym.fetch_results(self.sim, True)
            self.gym.refresh_dof_state_tensor(self.sim)
        self.post_physics_step()

        # return clipped obs, clipped states (None), rewards, dones and infos
        clip_obs = self.cfg.normalization.clip_observations
        self.obs_buf = torch.clip(self.obs_buf, -clip_obs, clip_obs)
        if self.privileged_obs_buf is not None:
            self.privileged_obs_buf = torch.clip(self.privileged_obs_buf, -clip_obs, clip_obs)
        return self.obs_buf, self.privileged_obs_buf, self.rew_buf, self.reset_buf, self.extras

    def update_cmd_action_latency_buffer(self):
        actions_scaled = self.actions * self.cfg.control.action_scale
        if self.cfg.domain_rand.add_cmd_action_latency:
            self.cmd_action_latency_buffer[:, :, 1:] = self.cmd_action_latency_buffer[
                :, :, :self.cfg.domain_rand.range_cmd_action_latency[1]].clone()
            self.cmd_action_latency_buffer[:, :, 0] = actions_scaled.clone()
            action_delayed = self.cmd_action_latency_buffer[
                torch.arange(self.num_envs), :, self.cmd_action_latency_simstep.long()]
        else:
            action_delayed = actions_scaled

        return action_delayed

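The latency buffers above work as shift registers: each control step the history is shifted one slot deeper and the newest sample is written to slot 0, so index k holds the sample from k steps ago. A standalone NumPy sketch of that mechanism (names here are illustrative, not part of the class):

```python
import numpy as np

max_delay = 3
buf = np.zeros(max_delay + 1)  # buf[k] = action from k steps ago

def push(buf, new_sample):
    buf[1:] = buf[:-1]   # shift history one step deeper
    buf[0] = new_sample  # newest sample lands in slot 0
    return buf

for t in [1.0, 2.0, 3.0, 4.0]:
    buf = push(buf, t)

delay = 2
assert buf[0] == 4.0      # current action
assert buf[delay] == 2.0  # action as it would be applied with a 2-step latency
```

Indexing the buffer with a per-env delay (as `cmd_action_latency_simstep` does) then selects a different history depth for each environment.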
    def update_obs_latency_buffer(self):
        if self.cfg.domain_rand.randomize_obs_motor_latency:
            q = (self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos
            dq = self.dof_vel * self.obs_scales.dof_vel
            self.obs_motor_latency_buffer[:, :, 1:] = self.obs_motor_latency_buffer[
                :, :, :self.cfg.domain_rand.range_obs_motor_latency[1]].clone()
            self.obs_motor_latency_buffer[:, :, 0] = torch.cat((q, dq), 1).clone()
        if self.cfg.domain_rand.randomize_obs_imu_latency:
            self.gym.refresh_actor_root_state_tensor(self.sim)
            self.base_quat[:] = self.root_states[:, 3:7]
            self.base_ang_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
            self.projected_gravity[:] = quat_rotate_inverse(self.base_quat, self.gravity_vec)
            self.obs_imu_latency_buffer[:, :, 1:] = self.obs_imu_latency_buffer[
                :, :, :self.cfg.domain_rand.range_obs_imu_latency[1]].clone()
            self.obs_imu_latency_buffer[:, :, 0] = torch.cat(
                (self.base_ang_vel * self.obs_scales.ang_vel, self.projected_gravity), 1).clone()

    def post_physics_step(self):
        """ check terminations, compute observations and rewards
            calls self._post_physics_step_callback() for common computations
            calls self._draw_debug_vis() if needed
        """
        self.gym.refresh_actor_root_state_tensor(self.sim)
        self.gym.refresh_net_contact_force_tensor(self.sim)

        self.episode_length_buf += 1
        self.common_step_counter += 1

        # prepare quantities
        self.base_pos[:] = self.root_states[:, 0:3]
        self.base_quat[:] = self.root_states[:, 3:7]
        self.rpy[:] = get_euler_xyz_in_tensor(self.base_quat[:])
        self.base_lin_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
        self.base_ang_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
        self.projected_gravity[:] = quat_rotate_inverse(self.base_quat, self.gravity_vec)

        self._post_physics_step_callback()

        # compute observations, rewards, resets, ...
        self.check_termination()
        self.compute_reward()
        env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
        self.reset_idx(env_ids)

        if self.cfg.domain_rand.push_robots:
            self._push_robots()

        self.compute_observations()  # in some cases a simulation step might be required to refresh some obs (for example body positions)

        self.last_actions[:] = self.actions[:]
        self.last_dof_vel[:] = self.dof_vel[:]
        self.last_root_vel[:] = self.root_states[:, 7:13]

    def check_termination(self):
        """ Check if environments need to be reset
        """
        self.reset_buf = torch.any(torch.norm(self.contact_forces[:, self.termination_contact_indices, :], dim=-1) > 1., dim=1)
        self.reset_buf |= torch.logical_or(torch.abs(self.rpy[:, 1]) > 1.0, torch.abs(self.rpy[:, 0]) > 0.8)
        self.time_out_buf = self.episode_length_buf > self.max_episode_length  # no terminal reward for time-outs
        self.reset_buf |= self.time_out_buf

    def reset_idx(self, env_ids):
        """ Reset some environments.
            Calls self._reset_dofs(env_ids), self._reset_root_states(env_ids), and self._resample_commands(env_ids)
            [Optional] calls self._update_terrain_curriculum(env_ids) and self.update_command_curriculum(env_ids)
            Logs episode info
            Resets some buffers

        Args:
            env_ids (list[int]): List of environment ids which must be reset
        """
        if len(env_ids) == 0:
            return

        # reset robot states
        self._reset_dofs(env_ids)
        self._reset_root_states(env_ids)

        self._resample_commands(env_ids)

        # reset buffers
        self.actions[env_ids] = 0.
        self.last_actions[env_ids] = 0.
        self.last_dof_vel[env_ids] = 0.
        self.feet_air_time[env_ids] = 0.
        self.episode_length_buf[env_ids] = 0
        self.reset_buf[env_ids] = 1
        # fill extras
        self.extras["episode"] = {}
        for key in self.episode_sums.keys():
            self.extras["episode"]['rew_' + key] = torch.mean(self.episode_sums[key][env_ids]) / self.max_episode_length_s
            self.episode_sums[key][env_ids] = 0.
        if self.cfg.commands.curriculum:
            self.extras["episode"]["max_command_x"] = self.command_ranges["lin_vel_x"][1]
        # send timeout info to the algorithm
        if self.cfg.env.send_timeouts:
            self.extras["time_outs"] = self.time_out_buf

    def compute_reward(self):
        """ Compute rewards
            Calls each reward function which had a non-zero scale (processed in self._prepare_reward_function())
            adds each term to the episode sums and to the total reward
        """
        self.rew_buf[:] = 0.
        for i in range(len(self.reward_functions)):
            name = self.reward_names[i]
            rew = self.reward_functions[i]() * self.reward_scales[name]
            self.rew_buf += rew
            self.episode_sums[name] += rew
        if self.cfg.rewards.only_positive_rewards:
            self.rew_buf[:] = torch.clip(self.rew_buf[:], min=0.)
        # add termination reward after clipping
        if "termination" in self.reward_scales:
            rew = self._reward_termination() * self.reward_scales["termination"]
            self.rew_buf += rew
            self.episode_sums["termination"] += rew

    def compute_observations(self):
        """ Computes observations
        """
        self.obs_buf = torch.cat((self.base_lin_vel * self.obs_scales.lin_vel,
                                  self.base_ang_vel * self.obs_scales.ang_vel,
                                  self.projected_gravity,
                                  self.commands[:, :3] * self.commands_scale,
                                  (self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos,
                                  self.dof_vel * self.obs_scales.dof_vel,
                                  self.actions
                                  ), dim=-1)
        # add perceptive inputs if not blind
        # add noise if needed
        if self.add_noise:
            self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec

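The additive noise at the end of compute_observations is uniform in [-scale, scale] per observation channel, with the per-channel scales built by _get_noise_scale_vec. A quick standalone NumPy check of that sampling form (values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.zeros(5)
noise_scale_vec = np.array([0.1, 0.1, 0.0, 0.2, 0.2])  # zero => channel left noise-free

# (2*u - 1) maps uniform [0, 1) samples to [-1, 1), then each channel is scaled
noisy = obs + (2 * rng.random(5) - 1) * noise_scale_vec
assert np.all(np.abs(noisy) <= noise_scale_vec + 1e-12)
assert noisy[2] == 0.0  # channels with zero scale are untouched
```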
    def create_sim(self):
        """ Creates simulation, terrain and environments
        """
        self.up_axis_idx = 2  # 2 for z, 1 for y -> adapt gravity accordingly
        self.sim = self.gym.create_sim(self.sim_device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
        self._create_ground_plane()
        self._create_envs()

    def set_camera(self, position, lookat):
        """ Set camera position and direction
        """
        cam_pos = gymapi.Vec3(position[0], position[1], position[2])
        cam_target = gymapi.Vec3(lookat[0], lookat[1], lookat[2])
        self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)

    #------------- Callbacks --------------
    def _process_rigid_shape_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the rigid shape properties of each environment.
            Called during environment creation.
            Base behavior: randomizes the friction of each environment

        Args:
            props (List[gymapi.RigidShapeProperties]): Properties of each shape of the asset
            env_id (int): Environment id

        Returns:
            [List[gymapi.RigidShapeProperties]]: Modified rigid shape properties
        """
        if self.cfg.domain_rand.randomize_friction:
            if env_id == 0:
                # prepare friction randomization
                friction_range = self.cfg.domain_rand.friction_range
                restitution_range = self.cfg.domain_rand.restitution_range
                num_buckets = 256
                bucket_ids = torch.randint(0, num_buckets, (self.num_envs, 1))
                friction_buckets = torch_rand_float(friction_range[0], friction_range[1], (num_buckets, 1), device='cpu')
                restitution_buckets = torch_rand_float(restitution_range[0], restitution_range[1], (num_buckets, 1), device='cpu')
                self.friction_coeffs = friction_buckets[bucket_ids]
                self.restitution_coeffs = restitution_buckets[bucket_ids]

            for s in range(len(props)):
                props[s].friction = self.friction_coeffs[env_id]
                props[s].restitution = self.restitution_coeffs[env_id]

            self.env_frictions[env_id] = self.friction_coeffs[env_id]

        return props

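The friction randomization above draws per-env coefficients from a fixed pool of 256 "buckets" rather than sampling each env independently, which keeps the number of distinct physics materials bounded. A small NumPy sketch of the same bucketing idea (variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
num_envs, num_buckets = 1000, 256
friction_range = (0.5, 1.25)

# sample one friction value per bucket, then assign each env a random bucket
buckets = rng.uniform(friction_range[0], friction_range[1], size=num_buckets)
bucket_ids = rng.integers(0, num_buckets, size=num_envs)
env_frictions = buckets[bucket_ids]

assert env_frictions.shape == (num_envs,)
# at most num_buckets distinct values, no matter how many envs there are
assert len(np.unique(env_frictions)) <= num_buckets
```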
    def _process_dof_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the DOF properties of each environment.
            Called during environment creation.
            Base behavior: stores position, velocity and torque limits defined in the URDF

        Args:
            props (numpy.array): Properties of each DOF of the asset
            env_id (int): Environment id

        Returns:
            [numpy.array]: Modified DOF properties
        """
        if env_id == 0:
            self.dof_pos_limits = torch.zeros(self.num_dof, 2, dtype=torch.float, device=self.device, requires_grad=False)
            self.dof_vel_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            self.torque_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            for i in range(len(props)):
                self.dof_pos_limits[i, 0] = props["lower"][i].item()
                self.dof_pos_limits[i, 1] = props["upper"][i].item()
                self.dof_vel_limits[i] = props["velocity"][i].item()
                self.torque_limits[i] = props["effort"][i].item()
                # soft limits
                m = (self.dof_pos_limits[i, 0] + self.dof_pos_limits[i, 1]) / 2
                r = self.dof_pos_limits[i, 1] - self.dof_pos_limits[i, 0]
                self.dof_pos_limits[i, 0] = m - 0.5 * r * self.cfg.rewards.soft_dof_pos_limit
                self.dof_pos_limits[i, 1] = m + 0.5 * r * self.cfg.rewards.soft_dof_pos_limit

        # randomization of the calculated motor torques for the real machine
        if self.cfg.domain_rand.randomize_calculated_torque:
            self.torque_multiplier[env_id, :] = torch_rand_float(self.cfg.domain_rand.torque_multiplier_range[0],
                                                                 self.cfg.domain_rand.torque_multiplier_range[1],
                                                                 (1, self.num_actions), device=self.device)

        # randomization of the motor zero calibration for the real machine
        if self.cfg.domain_rand.randomize_motor_zero_offset:
            self.motor_zero_offsets[env_id, :] = torch_rand_float(self.cfg.domain_rand.motor_zero_offset_range[0],
                                                                  self.cfg.domain_rand.motor_zero_offset_range[1],
                                                                  (1, self.num_actions), device=self.device)

        # randomization of the motor PD gains
        if self.cfg.domain_rand.randomize_pd_gains:
            self.p_gains_multiplier[env_id, :] = torch_rand_float(self.cfg.domain_rand.stiffness_multiplier_range[0],
                                                                  self.cfg.domain_rand.stiffness_multiplier_range[1],
                                                                  (1, self.num_actions), device=self.device)
            self.d_gains_multiplier[env_id, :] = torch_rand_float(self.cfg.domain_rand.damping_multiplier_range[0],
                                                                  self.cfg.domain_rand.damping_multiplier_range[1],
                                                                  (1, self.num_actions), device=self.device)

        # randomization of the joint friction in Isaac Gym
        if self.cfg.domain_rand.randomize_joint_friction:
            self.joint_friction_coeffs[env_id, 0] = torch_rand_float(self.cfg.domain_rand.joint_friction_range[0],
                                                                     self.cfg.domain_rand.joint_friction_range[1],
                                                                     (1, 1), device=self.device)

        # randomization of the joint damping in Isaac Gym
        if self.cfg.domain_rand.randomize_joint_damping:
            self.joint_damping_coeffs[env_id, 0] = torch_rand_float(self.cfg.domain_rand.joint_damping_range[0],
                                                                    self.cfg.domain_rand.joint_damping_range[1], (1, 1),
                                                                    device=self.device)

        # randomization of the joint armature in Isaac Gym
        if self.cfg.domain_rand.randomize_joint_armature:
            self.joint_armatures[env_id, 0] = torch_rand_float(self.cfg.domain_rand.joint_armature_range[0],
                                                               self.cfg.domain_rand.joint_armature_range[1], (1, 1),
                                                               device=self.device)

        for i in range(len(props)):
            props["friction"][i] *= self.joint_friction_coeffs[env_id, 0]
            props["damping"][i] *= self.joint_damping_coeffs[env_id, 0]
            props["armature"][i] = self.joint_armatures[env_id, 0]

        return props

    def _process_rigid_body_props(self, props, env_id):
        # if env_id == 0:
        #     sum = 0
        #     for i, p in enumerate(props):
        #         sum += p.mass
        #         print(f"Mass of body {i}: {p.mass} (before randomization)")
        #     print(f"Total mass {sum} (before randomization)")
        # randomize base mass
        if self.cfg.domain_rand.randomize_base_mass:
            self.added_base_masses = torch_rand_float(self.cfg.domain_rand.added_base_mass_range[0],
                                                      self.cfg.domain_rand.added_base_mass_range[1], (1, 1),
                                                      device=self.device)
            props[0].mass += self.added_base_masses

        self.body_mass[:] = props[0].mass

        # randomize link masses
        if self.cfg.domain_rand.randomize_link_mass:
            self.multiplied_link_masses_ratio = torch_rand_float(self.cfg.domain_rand.multiplied_link_mass_range[0],
                                                                 self.cfg.domain_rand.multiplied_link_mass_range[1],
                                                                 (1, self.num_bodies - 1), device=self.device)

            for i in range(1, len(props)):
                props[i].mass *= self.multiplied_link_masses_ratio[0, i - 1]

        # randomize base com
        if self.cfg.domain_rand.randomize_base_com:
            self.added_base_com = torch_rand_float(self.cfg.domain_rand.added_base_com_range[0],
                                                   self.cfg.domain_rand.added_base_com_range[1], (1, 3),
                                                   device=self.device)
            props[0].com += gymapi.Vec3(self.added_base_com[0, 0], self.added_base_com[0, 1],
                                        self.added_base_com[0, 2])

        return props

    def _post_physics_step_callback(self):
        """ Callback called before computing terminations, rewards, and observations
            Default behaviour: Compute ang vel command based on target and heading, compute measured terrain heights and randomly push robots
        """
        env_ids = (self.episode_length_buf % int(self.cfg.commands.resampling_time / self.dt) == 0).nonzero(as_tuple=False).flatten()
        self._resample_commands(env_ids)
        if self.cfg.commands.heading_command:
            forward = quat_apply(self.base_quat, self.forward_vec)
            heading = torch.atan2(forward[:, 1], forward[:, 0])
            self.commands[:, 2] = torch.clip(0.5 * wrap_to_pi(self.commands[:, 3] - heading), -1., 1.)

        if self.cfg.domain_rand.push_robots and (self.common_step_counter % self.cfg.domain_rand.push_interval == 0):
            self._push_robots()

    def _resample_commands(self, env_ids):
        """ Randomly select commands of some environments

        Args:
            env_ids (List[int]): Environment ids for which new commands are needed
        """
        self.commands[env_ids, 0] = torch_rand_float(self.command_ranges["lin_vel_x"][0], self.command_ranges["lin_vel_x"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        self.commands[env_ids, 1] = torch_rand_float(self.command_ranges["lin_vel_y"][0], self.command_ranges["lin_vel_y"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        if self.cfg.commands.heading_command:
            self.commands[env_ids, 3] = torch_rand_float(self.command_ranges["heading"][0], self.command_ranges["heading"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        else:
            self.commands[env_ids, 2] = torch_rand_float(self.command_ranges["ang_vel_yaw"][0], self.command_ranges["ang_vel_yaw"][1], (len(env_ids), 1), device=self.device).squeeze(1)

        # set small commands to zero
        self.commands[env_ids, :2] *= (torch.norm(self.commands[env_ids, :2], dim=1) > 0.2).unsqueeze(1)

    def _compute_torques(self, actions):
        """ Compute torques from actions.
            Actions can be interpreted as position or velocity targets given to a PD controller, or directly as scaled torques.
            [NOTE]: torques must have the same dimension as the number of DOFs, even if some DOFs are not actuated.

        Args:
            actions (torch.Tensor): Actions

        Returns:
            [torch.Tensor]: Torques sent to the simulation
        """
        # PD controller
        actions_scaled = actions * self.cfg.control.action_scale
        control_type = self.cfg.control.control_type
        if control_type == "P":
            torques = self.p_gains * (actions_scaled + self.default_dof_pos - self.dof_pos) - self.d_gains * self.dof_vel
        elif control_type == "V":
            torques = self.p_gains * (actions_scaled - self.dof_vel) - self.d_gains * (self.dof_vel - self.last_dof_vel) / self.sim_params.dt
        elif control_type == "T":
            torques = actions_scaled
        else:
            raise NameError(f"Unknown controller type: {control_type}")
        return torch.clip(torques, -self.torque_limits, self.torque_limits)

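The "P" branch above is an ordinary joint-space PD law around (default position + scaled action), followed by torque clipping. A minimal standalone sketch in NumPy (function and variable names are illustrative, not part of the class):

```python
import numpy as np

def pd_torques(action_scaled, default_pos, dof_pos, dof_vel, p_gains, d_gains, torque_limits):
    """PD position control: track (default_pos + scaled action), damp velocity, clip to limits."""
    target = action_scaled + default_pos
    torques = p_gains * (target - dof_pos) - d_gains * dof_vel
    return np.clip(torques, -torque_limits, torque_limits)

# one joint: target 0.5 rad away, at rest, kp=20, kd=0.5, torque limit 8 Nm
tau = pd_torques(np.array([0.5]), np.array([0.0]), np.array([0.0]),
                 np.array([0.0]), 20.0, 0.5, 8.0)
# the unclipped torque would be 10.0; the limit caps it at 8.0
assert tau[0] == 8.0
```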
    def _reset_dofs(self, env_ids):
        """ Resets DOF position and velocities of selected environments
            Positions are randomly selected within 0.5:1.5 x default positions.
            Velocities are set to zero.

        Args:
            env_ids (List[int]): Environment ids
        """
        self.dof_pos[env_ids] = self.default_dof_pos * torch_rand_float(0.5, 1.5, (len(env_ids), self.num_dof), device=self.device)
        self.dof_vel[env_ids] = 0.

        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_dof_state_tensor_indexed(self.sim,
                                              gymtorch.unwrap_tensor(self.dof_state),
                                              gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))

    def _reset_root_states(self, env_ids):
        """ Resets ROOT states position and velocities of selected environments
            Sets base position based on the curriculum
            Selects randomized base velocities within -0.5:0.5 [m/s, rad/s]
        Args:
            env_ids (List[int]): Environment ids
        """
        # base position
        if self.custom_origins:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
            self.root_states[env_ids, :2] += torch_rand_float(-1., 1., (len(env_ids), 2), device=self.device)  # xy position within 1m of the center
        else:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
        # base velocities
        self.root_states[env_ids, 7:13] = torch_rand_float(-0.5, 0.5, (len(env_ids), 6), device=self.device)  # [7:10]: lin vel, [10:13]: ang vel
        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_actor_root_state_tensor_indexed(self.sim,
                                                     gymtorch.unwrap_tensor(self.root_states),
                                                     gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))

        if self.cfg.domain_rand.randomize_imu_init:
            yaw_angles = torch_rand_sqrt_float(self.cfg.domain_rand.imu_init_yaw_range[0],
                                               self.cfg.domain_rand.imu_init_yaw_range[1], (len(env_ids),),
                                               device=self.device)
            # build the yaw quaternion (x, y, z, w)
            quat = torch.zeros((len(env_ids), 4), device=self.device)
            quat[:, 2] = torch.sin(yaw_angles / 2)
            quat[:, 3] = torch.cos(yaw_angles / 2)
            # overwrite the root orientation directly
            self.root_states[env_ids, 3:7] = quat

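The IMU-init randomization above builds a pure-yaw quaternion in (x, y, z, w) order: for a rotation about the vertical axis only the z and w components are nonzero, with z = sin(yaw/2) and w = cos(yaw/2). A standalone NumPy check of that construction (illustrative, outside the class):

```python
import numpy as np

yaw = np.pi / 2  # 90 degrees about the vertical axis
quat = np.array([0.0, 0.0, np.sin(yaw / 2), np.cos(yaw / 2)])  # (x, y, z, w)

# a valid rotation quaternion has unit norm
assert np.isclose(np.linalg.norm(quat), 1.0)
# at 90 degrees, z and w are both sqrt(2)/2
assert np.isclose(quat[2], np.sqrt(2) / 2)
assert np.isclose(quat[3], np.sqrt(2) / 2)
```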
    def _push_robots(self):
        """ Randomly pushes the robots. Emulates an impulse by setting a randomized base velocity.
        """
        env_ids = torch.arange(self.num_envs, device=self.device)
        push_env_ids = env_ids[self.episode_length_buf[env_ids] % int(self.cfg.domain_rand.push_interval) == 0]
        if len(push_env_ids) == 0:
            return
        max_vel = self.cfg.domain_rand.max_push_vel_xy
        # only perturb the selected envs; writing to all rows would clobber the state of envs that are not pushed
        self.root_states[push_env_ids, 7:9] = torch_rand_float(-max_vel, max_vel, (len(push_env_ids), 2), device=self.device)  # lin vel x/y

        env_ids_int32 = push_env_ids.to(dtype=torch.int32)
        self.gym.set_actor_root_state_tensor_indexed(self.sim,
                                                     gymtorch.unwrap_tensor(self.root_states),
                                                     gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))

    def update_command_curriculum(self, env_ids):
        """ Implements a curriculum of increasing commands

        Args:
            env_ids (List[int]): ids of environments being reset
        """
        # If the tracking reward is above 80% of the maximum, increase the range of commands
        if torch.mean(self.episode_sums["tracking_lin_vel"][env_ids]) / self.max_episode_length > 0.8 * self.reward_scales["tracking_lin_vel"]:
            self.command_ranges["lin_vel_x"][0] = np.clip(self.command_ranges["lin_vel_x"][0] - 0.5, -self.cfg.commands.max_curriculum, 0.)
            self.command_ranges["lin_vel_x"][1] = np.clip(self.command_ranges["lin_vel_x"][1] + 0.5, 0., self.cfg.commands.max_curriculum)

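The curriculum update above widens the x-velocity command range symmetrically by 0.5 m/s per trigger, clipped so the lower bound never exceeds zero and both bounds stay within the curriculum cap. A standalone sketch of one widening step (values are illustrative):

```python
import numpy as np

max_curriculum = 2.0
lin_vel_x = [-1.0, 1.0]  # current [min, max] command range, m/s

# widen the range by 0.5 m/s on each side, clipped to the curriculum cap
lin_vel_x[0] = float(np.clip(lin_vel_x[0] - 0.5, -max_curriculum, 0.))
lin_vel_x[1] = float(np.clip(lin_vel_x[1] + 0.5, 0., max_curriculum))
assert lin_vel_x == [-1.5, 1.5]
```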
    def _get_noise_scale_vec(self, cfg):
        """ Sets a vector used to scale the noise added to the observations.
            [NOTE]: Must be adapted when changing the observation structure

        Args:
            cfg (Dict): Environment config file

        Returns:
            [torch.Tensor]: Vector of scales used to multiply a uniform distribution in [-1, 1]
        """
        noise_vec = torch.zeros_like(self.obs_buf[0])
        self.add_noise = self.cfg.noise.add_noise
        noise_scales = self.cfg.noise.noise_scales
        noise_level = self.cfg.noise.noise_level
        noise_vec[:3] = noise_scales.lin_vel * noise_level * self.obs_scales.lin_vel
        noise_vec[3:6] = noise_scales.ang_vel * noise_level * self.obs_scales.ang_vel
        noise_vec[6:9] = noise_scales.gravity * noise_level
        noise_vec[9:12] = 0.  # commands
        noise_vec[12:12+self.num_actions] = noise_scales.dof_pos * noise_level * self.obs_scales.dof_pos
        noise_vec[12+self.num_actions:12+2*self.num_actions] = noise_scales.dof_vel * noise_level * self.obs_scales.dof_vel
        noise_vec[12+2*self.num_actions:12+3*self.num_actions] = 0.  # previous actions

        return noise_vec

    #----------------------------------------
    def _init_buffers(self):
        """ Initialize torch tensors which will contain simulation states and processed quantities
        """
        # get gym GPU state tensors
        actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
        dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
        net_contact_forces = self.gym.acquire_net_contact_force_tensor(self.sim)
        rigid_body_state = self.gym.acquire_rigid_body_state_tensor(self.sim)

        self.gym.refresh_dof_state_tensor(self.sim)
        self.gym.refresh_actor_root_state_tensor(self.sim)
        self.gym.refresh_net_contact_force_tensor(self.sim)
        self.gym.refresh_rigid_body_state_tensor(self.sim)

        # create some wrapper tensors for different slices
        self.root_states = gymtorch.wrap_tensor(actor_root_state)
        self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
        self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
        self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
        self.base_quat = self.root_states[:, 3:7]
        self.rpy = get_euler_xyz_in_tensor(self.base_quat)
        self.base_pos = self.root_states[:self.num_envs, 0:3]
        self.contact_forces = gymtorch.wrap_tensor(net_contact_forces).view(self.num_envs, -1, 3)  # shape: num_envs, num_bodies, xyz axis

        # initialize some data used later on
        self.common_step_counter = 0
        self.extras = {}
        self.noise_scale_vec = self._get_noise_scale_vec(self.cfg)
        self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
        self.forward_vec = to_torch([1., 0., 0.], device=self.device).repeat((self.num_envs, 1))
        self.torques = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.p_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.d_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.last_actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.last_dof_vel = torch.zeros_like(self.dof_vel)
        self.last_root_vel = torch.zeros_like(self.root_states[:, 7:13])
        self.commands = torch.zeros(self.num_envs, self.cfg.commands.num_commands, dtype=torch.float, device=self.device, requires_grad=False)  # x vel, y vel, yaw vel, heading
        self.commands_scale = torch.tensor([self.obs_scales.lin_vel, self.obs_scales.lin_vel, self.obs_scales.ang_vel], device=self.device, requires_grad=False,)  # TODO change this
        self.feet_air_time = torch.zeros(self.num_envs, self.feet_indices.shape[0], dtype=torch.float, device=self.device, requires_grad=False)
        self.last_contacts = torch.zeros(self.num_envs, len(self.feet_indices), dtype=torch.bool, device=self.device, requires_grad=False)
        self.base_lin_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
        self.base_ang_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
        self.projected_gravity = quat_rotate_inverse(self.base_quat, self.gravity_vec)

        # joint position offsets and PD gains
        self.default_dof_pos = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
        for i in range(self.num_dofs):
            name = self.dof_names[i]
            angle = self.cfg.init_state.default_joint_angles[name]
            self.default_dof_pos[i] = angle
            found = False
            for dof_name in self.cfg.control.stiffness.keys():
                if dof_name in name:
                    self.p_gains[i] = self.cfg.control.stiffness[dof_name]
                    self.d_gains[i] = self.cfg.control.damping[dof_name]
                    found = True
            if not found:
                self.p_gains[i] = 0.
                self.d_gains[i] = 0.
                if self.cfg.control.control_type in ["P", "V"]:
                    print(f"PD gains of joint {name} were not defined, setting them to zero")

        self.rand_push_force = torch.zeros((self.num_envs, 3), dtype=torch.float32, device=self.device)
        self.rand_push_torque = torch.zeros((self.num_envs, 3), dtype=torch.float32, device=self.device)
        self.default_dof_pos = self.default_dof_pos.unsqueeze(0)

        self.default_joint_pd_target = self.default_dof_pos.clone()
        self.cmd_action_latency_buffer = torch.zeros(self.num_envs, self.num_actions,
                                                     self.cfg.domain_rand.range_cmd_action_latency[1] + 1,
                                                     device=self.device)
        self.obs_motor_latency_buffer = torch.zeros(self.num_envs, self.num_actions * 2,
                                                    self.cfg.domain_rand.range_obs_motor_latency[1] + 1,
                                                    device=self.device)
        self.obs_imu_latency_buffer = torch.zeros(self.num_envs, 6, self.cfg.domain_rand.range_obs_imu_latency[1] + 1,
                                                  device=self.device)
        self.cmd_action_latency_simstep = torch.zeros(self.num_envs, dtype=torch.long, device=self.device)
        self.obs_motor_latency_simstep = torch.zeros(self.num_envs, dtype=torch.long, device=self.device)
        self.obs_imu_latency_simstep = torch.zeros(self.num_envs, dtype=torch.long, device=self.device)

    def _prepare_reward_function(self):
        """ Prepares a list of reward functions, which will be called to compute the total reward.
            Looks for self._reward_<REWARD_NAME>, where <REWARD_NAME> are names of all non-zero reward scales in the cfg.
        """
        # remove zero scales + multiply non-zero ones by dt
        for key in list(self.reward_scales.keys()):
            scale = self.reward_scales[key]
            if scale == 0:
                self.reward_scales.pop(key)
            else:
                self.reward_scales[key] *= self.dt
        # prepare list of functions
        self.reward_functions = []
        self.reward_names = []
        for name, scale in self.reward_scales.items():
            if name == "termination":
                continue
            self.reward_names.append(name)
            name = '_reward_' + name
            self.reward_functions.append(getattr(self, name))

        # reward episode sums
        self.episode_sums = {name: torch.zeros(self.num_envs, dtype=torch.float, device=self.device, requires_grad=False)
                             for name in self.reward_scales.keys()}

    def _create_ground_plane(self):
        """ Adds a ground plane to the simulation, sets friction and restitution based on the cfg.
        """
        plane_params = gymapi.PlaneParams()
        plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
        plane_params.static_friction = self.cfg.terrain.static_friction
        plane_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        plane_params.restitution = self.cfg.terrain.restitution
        self.gym.add_ground(self.sim, plane_params)

    def _create_envs(self):
        """ Creates environments:
            1. loads the robot URDF/MJCF asset,
            2. For each environment
               2.1 creates the environment,
               2.2 calls DOF and Rigid shape properties callbacks,
               2.3 creates the actor with these properties and adds it to the env
            3. Stores indices of different bodies of the robot
        """
        asset_path = self.cfg.asset.file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
        asset_root = os.path.dirname(asset_path)
        asset_file = os.path.basename(asset_path)

        asset_options = gymapi.AssetOptions()
        asset_options.default_dof_drive_mode = self.cfg.asset.default_dof_drive_mode
        asset_options.collapse_fixed_joints = self.cfg.asset.collapse_fixed_joints
        asset_options.replace_cylinder_with_capsule = self.cfg.asset.replace_cylinder_with_capsule
        asset_options.flip_visual_attachments = self.cfg.asset.flip_visual_attachments
        asset_options.fix_base_link = self.cfg.asset.fix_base_link
        asset_options.density = self.cfg.asset.density
        asset_options.angular_damping = self.cfg.asset.angular_damping
        asset_options.linear_damping = self.cfg.asset.linear_damping
        asset_options.max_angular_velocity = self.cfg.asset.max_angular_velocity
        asset_options.max_linear_velocity = self.cfg.asset.max_linear_velocity
        asset_options.armature = self.cfg.asset.armature
        asset_options.thickness = self.cfg.asset.thickness
        asset_options.disable_gravity = self.cfg.asset.disable_gravity

        robot_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
        self.num_dof = self.gym.get_asset_dof_count(robot_asset)
        self.num_bodies = self.gym.get_asset_rigid_body_count(robot_asset)
        dof_props_asset = self.gym.get_asset_dof_properties(robot_asset)
        rigid_shape_props_asset = self.gym.get_asset_rigid_shape_properties(robot_asset)

        # save body names from the asset
        body_names = self.gym.get_asset_rigid_body_names(robot_asset)
        self.dof_names = self.gym.get_asset_dof_names(robot_asset)

        self.num_bodies = len(body_names)
        self.num_dofs = len(self.dof_names)

        feet_names = [s for s in body_names if self.cfg.asset.foot_name in s]
        penalized_contact_names = []
        for name in self.cfg.asset.penalize_contacts_on:
            penalized_contact_names.extend([s for s in body_names if name in s])
        termination_contact_names = []
        for name in self.cfg.asset.terminate_after_contacts_on:
            termination_contact_names.extend([s for s in body_names if name in s])

        wheel_names = []
        for name in self.cfg.asset.wheel_name:
            wheel_names.extend([s for s in self.dof_names if name in s])

        base_init_state_list = self.cfg.init_state.pos + self.cfg.init_state.rot + self.cfg.init_state.lin_vel + self.cfg.init_state.ang_vel
        self.base_init_state = to_torch(base_init_state_list, device=self.device, requires_grad=False)
        start_pose = gymapi.Transform()
        start_pose.p = gymapi.Vec3(*self.base_init_state[:3])

        self.joint_friction_coeffs = torch.ones(self.num_envs, 1, dtype=torch.float, device=self.device,
                                                requires_grad=False)

        self.joint_damping_coeffs = torch.ones(self.num_envs, 1, dtype=torch.float, device=self.device,
                                               requires_grad=False)

        self.joint_armatures = torch.zeros(self.num_envs, 1, dtype=torch.float, device=self.device, requires_grad=False)

        self.torque_multiplier = torch.ones(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
|
||||
requires_grad=False)
|
||||
self.motor_zero_offsets = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
|
||||
requires_grad=False)
|
||||
self.p_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
|
||||
self.d_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
|
||||
self.p_gains_multiplier = torch.ones(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
|
||||
requires_grad=False)
|
||||
self.d_gains_multiplier = torch.ones(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
|
||||
requires_grad=False)
|
||||
|
||||
self._get_env_origins()
|
||||
env_lower = gymapi.Vec3(0., 0., 0.)
|
||||
env_upper = gymapi.Vec3(0., 0., 0.)
|
||||
self.actor_handles = []
|
||||
self.envs = []
|
||||
self.env_frictions = torch.zeros(self.num_envs, 1, dtype=torch.float32, device=self.device)
|
||||
self.body_mass = torch.zeros(self.num_envs, 1, dtype=torch.float32, device=self.device, requires_grad=False)
|
||||
|
||||
for i in range(self.num_envs):
|
||||
# create env instance
|
||||
env_handle = self.gym.create_env(self.sim, env_lower, env_upper, int(np.sqrt(self.num_envs)))
|
||||
pos = self.env_origins[i].clone()
|
||||
pos[:2] += torch_rand_float(-1., 1., (2,1), device=self.device).squeeze(1)
|
||||
start_pose.p = gymapi.Vec3(*pos)
|
||||
|
||||
rigid_shape_props = self._process_rigid_shape_props(rigid_shape_props_asset, i)
|
||||
self.gym.set_asset_rigid_shape_properties(robot_asset, rigid_shape_props)
|
||||
actor_handle = self.gym.create_actor(env_handle, robot_asset, start_pose, self.cfg.asset.name, i, self.cfg.asset.self_collisions, 0)
|
||||
dof_props = self._process_dof_props(dof_props_asset, i)
|
||||
self.gym.set_actor_dof_properties(env_handle, actor_handle, dof_props)
|
||||
body_props = self.gym.get_actor_rigid_body_properties(env_handle, actor_handle)
|
||||
body_props = self._process_rigid_body_props(body_props, i)
|
||||
self.gym.set_actor_rigid_body_properties(env_handle, actor_handle, body_props, recomputeInertia=True)
|
||||
self.envs.append(env_handle)
|
||||
self.actor_handles.append(actor_handle)
|
||||
|
||||
self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
|
||||
for i in range(len(feet_names)):
|
||||
self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], feet_names[i])
|
||||
|
||||
self.penalised_contact_indices = torch.zeros(len(penalized_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
|
||||
for i in range(len(penalized_contact_names)):
|
||||
self.penalised_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], penalized_contact_names[i])
|
||||
|
||||
self.termination_contact_indices = torch.zeros(len(termination_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
|
||||
for i in range(len(termination_contact_names)):
|
||||
self.termination_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], termination_contact_names[i])
|
||||
|
||||
def _get_env_origins(self):
|
||||
""" Sets environment origins. On rough terrain the origins are defined by the terrain platforms.
|
||||
Otherwise create a grid.
|
||||
"""
|
||||
|
||||
self.custom_origins = False
|
||||
self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
|
||||
# create a grid of robots
|
||||
num_cols = np.floor(np.sqrt(self.num_envs))
|
||||
num_rows = np.ceil(self.num_envs / num_cols)
|
||||
xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols))
|
||||
spacing = self.cfg.env.env_spacing
|
||||
self.env_origins[:, 0] = spacing * xx.flatten()[:self.num_envs]
|
||||
self.env_origins[:, 1] = spacing * yy.flatten()[:self.num_envs]
|
||||
self.env_origins[:, 2] = 0.
|
||||
|
||||
def _parse_cfg(self, cfg):
|
||||
self.dt = self.cfg.control.decimation * self.sim_params.dt
|
||||
self.obs_scales = self.cfg.normalization.obs_scales
|
||||
self.reward_scales = class_to_dict(self.cfg.rewards.scales)
|
||||
self.command_ranges = class_to_dict(self.cfg.commands.ranges)
|
||||
|
||||
|
||||
self.max_episode_length_s = self.cfg.env.episode_length_s
|
||||
self.max_episode_length = np.ceil(self.max_episode_length_s / self.dt)
|
||||
|
||||
self.cfg.domain_rand.push_interval = np.ceil(self.cfg.domain_rand.push_interval_s / self.dt)
|
||||
|
||||
#------------ reward functions----------------
|
||||
def _reward_lin_vel_z(self):
|
||||
# Penalize z axis base linear velocity
|
||||
return torch.square(self.base_lin_vel[:, 2])
|
||||
|
||||
def _reward_ang_vel_xy(self):
|
||||
# Penalize xy axes base angular velocity
|
||||
return torch.sum(torch.square(self.base_ang_vel[:, :2]), dim=1)
|
||||
|
||||
def _reward_orientation(self):
|
||||
# Penalize non flat base orientation
|
||||
return torch.sum(torch.square(self.projected_gravity[:, :2]), dim=1)
|
||||
|
||||
def _reward_base_height(self):
|
||||
# Penalize base height away from target
|
||||
base_height = self.root_states[:, 2]
|
||||
return torch.square(base_height - self.cfg.rewards.base_height_target)
|
||||
|
||||
def _reward_torques(self):
|
||||
# Penalize torques
|
||||
return torch.sum(torch.square(self.torques), dim=1)
|
||||
|
||||
def _reward_dof_vel(self):
|
||||
# Penalize dof velocities
|
||||
return torch.sum(torch.square(self.dof_vel), dim=1)
|
||||
|
||||
def _reward_dof_acc(self):
|
||||
# Penalize dof accelerations
|
||||
return torch.sum(torch.square((self.last_dof_vel - self.dof_vel) / self.dt), dim=1)
|
||||
|
||||
def _reward_action_rate(self):
|
||||
# Penalize changes in actions
|
||||
return torch.sum(torch.square(self.last_actions - self.actions), dim=1)
|
||||
|
||||
def _reward_collision(self):
|
||||
# Penalize collisions on selected bodies
|
||||
return torch.sum(1.*(torch.norm(self.contact_forces[:, self.penalised_contact_indices, :], dim=-1) > 0.1), dim=1)
|
||||
|
||||
def _reward_termination(self):
|
||||
# Terminal reward / penalty
|
||||
return self.reset_buf * ~self.time_out_buf
|
||||
|
||||
def _reward_dof_pos_limits(self):
|
||||
# Penalize dof positions too close to the limit
|
||||
out_of_limits = -(self.dof_pos - self.dof_pos_limits[:, 0]).clip(max=0.) # lower limit
|
||||
out_of_limits += (self.dof_pos - self.dof_pos_limits[:, 1]).clip(min=0.)
|
||||
return torch.sum(out_of_limits, dim=1)
|
||||
|
||||
def _reward_dof_vel_limits(self):
|
||||
# Penalize dof velocities too close to the limit
|
||||
# clip to max error = 1 rad/s per joint to avoid huge penalties
|
||||
return torch.sum((torch.abs(self.dof_vel) - self.dof_vel_limits*self.cfg.rewards.soft_dof_vel_limit).clip(min=0., max=1.), dim=1)
|
||||
|
||||
def _reward_torque_limits(self):
|
||||
# penalize torques too close to the limit
|
||||
return torch.sum((torch.abs(self.torques) - self.torque_limits*self.cfg.rewards.soft_torque_limit).clip(min=0.), dim=1)
|
||||
|
||||
def _reward_tracking_lin_vel(self):
|
||||
# Tracking of linear velocity commands (xy axes)
|
||||
lin_vel_error = torch.sum(torch.square(self.commands[:, :2] - self.base_lin_vel[:, :2]), dim=1)
|
||||
return torch.exp(-lin_vel_error/self.cfg.rewards.tracking_sigma)
|
||||
|
||||
def _reward_tracking_ang_vel(self):
|
||||
# Tracking of angular velocity commands (yaw)
|
||||
ang_vel_error = torch.square(self.commands[:, 2] - self.base_ang_vel[:, 2])
|
||||
return torch.exp(-ang_vel_error/self.cfg.rewards.tracking_sigma)
|
||||
|
||||
def _reward_feet_air_time(self):
|
||||
# Reward long steps
|
||||
# Need to filter the contacts because the contact reporting of PhysX is unreliable on meshes
|
||||
contact = self.contact_forces[:, self.feet_indices, 2] > 1.
|
||||
contact_filt = torch.logical_or(contact, self.last_contacts)
|
||||
self.last_contacts = contact
|
||||
first_contact = (self.feet_air_time > 0.) * contact_filt
|
||||
self.feet_air_time += self.dt
|
||||
rew_airTime = torch.sum((self.feet_air_time - 0.5) * first_contact, dim=1) # reward only on first contact with the ground
|
||||
rew_airTime *= torch.norm(self.commands[:, :2], dim=1) > 0.1 #no reward for zero command
|
||||
self.feet_air_time *= ~contact_filt
|
||||
return rew_airTime
|
||||
|
||||
def _reward_stumble(self):
|
||||
# Penalize feet hitting vertical surfaces
|
||||
return torch.any(torch.norm(self.contact_forces[:, self.feet_indices, :2], dim=2) >\
|
||||
5 *torch.abs(self.contact_forces[:, self.feet_indices, 2]), dim=1)
|
||||
|
||||
def _reward_stand_still(self):
|
||||
# Penalize motion at zero commands
|
||||
return torch.sum(torch.abs(self.dof_pos - self.default_dof_pos), dim=1) * (torch.norm(self.commands[:, :2], dim=1) < 0.1)
|
||||
|
||||
def _reward_feet_contact_forces(self):
|
||||
# penalize high contact forces
|
||||
return torch.sum((torch.norm(self.contact_forces[:, self.feet_indices, :], dim=-1) - self.cfg.rewards.max_contact_force).clip(min=0.), dim=1)
|
||||
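The two tracking rewards above push a squared velocity error through an exponential kernel, so the reward is 1 at perfect tracking and decays smoothly as the error grows. A minimal standalone sketch of that shape (plain NumPy, outside the simulator; `tracking_sigma` matches the config default of 0.25, and the function name here is chosen for illustration):

```python
import numpy as np

def tracking_reward(command_xy, base_lin_vel_xy, tracking_sigma=0.25):
    """Exponential tracking kernel: exp(-||command - velocity||^2 / sigma)."""
    error = np.sum(np.square(command_xy - base_lin_vel_xy), axis=-1)
    return np.exp(-error / tracking_sigma)

# perfect tracking -> 1.0; larger error -> smoothly smaller reward
print(tracking_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 1.0
print(tracking_reward(np.array([1.0, 0.0]), np.array([0.5, 0.0])) < 1.0)  # True
```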
270
legged_gym/envs/base/legged_robot_config.py
Normal file
@@ -0,0 +1,270 @@
from .base_config import BaseConfig


class LeggedRobotCfg(BaseConfig):
    class env:
        num_envs = 4096
        num_observations = 54
        num_privileged_obs = None  # if not None, a privileged_obs_buf is returned by step() (critic obs for asymmetric training); otherwise None is returned
        num_actions = 14
        env_spacing = 3.  # not used with heightfields/trimeshes
        send_timeouts = True  # send time-out information to the algorithm
        episode_length_s = 20  # episode length in seconds
        test = False

    class terrain:
        mesh_type = 'plane'  # none, plane, heightfield or trimesh
        horizontal_scale = 0.1  # [m]
        vertical_scale = 0.005  # [m]
        border_size = 25  # [m]
        curriculum = True
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # rough terrain only:
        measure_heights = True
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # 1m x 1.6m rectangle (without center line)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # dict of arguments for the selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)
        # terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
        terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
        # trimesh only:
        slope_treshold = 0.75  # slopes above this threshold will be corrected to vertical surfaces

    class commands:
        curriculum = True
        max_curriculum = 1.
        num_commands = 4  # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10.  # time before commands are changed [s]
        heading_command = True  # if true: compute ang vel command from heading error

        class ranges:
            lin_vel_x = [-1.0, 1.0]  # min max [m/s]
            lin_vel_y = [-1.0, 1.0]  # min max [m/s]
            ang_vel_yaw = [-1, 1]  # min max [rad/s]
            heading = [-3.14, 3.14]

            # lin_vel_x = [0, 1.0]  # min max [m/s]
            # lin_vel_y = [0, 0]  # min max [m/s]
            # ang_vel_yaw = [0, 0]  # min max [rad/s]
            # heading = [0, 0]

    class init_state:
        pos = [0.0, 0.0, 1.]  # x, y, z [m]
        rot = [0.0, 0.0, 0.0, 1.0]  # x, y, z, w [quat]
        lin_vel = [0.0, 0.0, 0.0]  # x, y, z [m/s]
        ang_vel = [0.0, 0.0, 0.0]  # x, y, z [rad/s]
        default_joint_angles = {  # target angles when action = 0.0
            "joint_a": 0.,
            "joint_b": 0.}

    class control:
        control_type = 'P'  # P: position, V: velocity, T: torques
        # PD drive parameters:
        stiffness = {'joint_a': 10.0, 'joint_b': 15.}  # [N*m/rad]
        damping = {'joint_a': 1.0, 'joint_b': 1.5}  # [N*m*s/rad]
        # action scale: target angle = actionScale * action + defaultAngle
        action_scale = 0.5
        # decimation: number of control action updates @ sim DT per policy DT
        decimation = 4

    class asset:
        file = ""
        name = "legged_robot"  # actor name
        foot_name = "None"  # name of the feet bodies, used to index body state and contact force tensors
        penalize_contacts_on = []
        terminate_after_contacts_on = []
        disable_gravity = False
        collapse_fixed_joints = True  # merge bodies connected by fixed joints. Specific fixed joints can be kept by adding " <... dont_collapse="true">
        fix_base_link = False  # fix the base of the robot
        default_dof_drive_mode = 3  # see GymDofDriveModeFlags (0 is none, 1 is pos tgt, 2 is vel tgt, 3 is effort)
        self_collisions = 0  # 1 to disable, 0 to enable... bitwise filter
        replace_cylinder_with_capsule = True  # replace collision cylinders with capsules, leads to faster/more stable simulation
        flip_visual_attachments = True  # some .obj meshes must be flipped from y-up to z-up

        density = 0.001
        angular_damping = 0.
        linear_damping = 0.
        max_angular_velocity = 1000.
        max_linear_velocity = 1000.
        armature = 0.
        thickness = 0.01

    class domain_rand:
        randomize_friction = False
        friction_range = [-0.2, 1.3]

        push_robots = False
        push_interval_s = 4
        max_push_vel_xy = 0.4
        max_push_ang_vel = 0.6

        continuous_push = False
        max_push_force = 0.5
        max_push_torque = 0.5
        push_force_noise = 0.5
        push_torque_noise = 0.5

        randomize_base_mass = False
        added_base_mass_range = [-2.5, 2.5]

        randomize_link_mass = False
        multiplied_link_mass_range = [0.9, 1.1]

        randomize_base_com = False
        added_base_com_range = [-0.05, 0.05]

        randomize_pd_gains = False
        stiffness_multiplier_range = [0.8, 1.2]
        damping_multiplier_range = [0.8, 1.2]

        randomize_calculated_torque = False
        torque_multiplier_range = [0.8, 1.2]

        randomize_motor_zero_offset = False
        motor_zero_offset_range = [-0.035, 0.035]  # offset to add to the motor angles

        randomize_joint_friction = False
        joint_friction_range = [0.01, 1.15]

        randomize_joint_damping = False
        joint_damping_range = [0.3, 1.5]

        randomize_joint_armature = False
        joint_armature_range = [0.0001, 0.05]  # range to contain the real joint armature

        add_obs_latency = False
        randomize_obs_motor_latency = False
        range_obs_motor_latency = [2, 10]
        randomize_obs_imu_latency = False
        range_obs_imu_latency = [1, 2]

        add_cmd_action_latency = False
        randomize_cmd_action_latency = False
        range_cmd_action_latency = [2, 4]

        randomize_imu_init = True
        imu_init_yaw_range = [-1.57, 1.57]
        imu_yaw_range = 0.5

    class rewards:
        class scales:
            termination = -0.0
            tracking_lin_vel = 1.0
            tracking_ang_vel = 0.5
            lin_vel_z = -2.0
            ang_vel_xy = -0.05
            orientation = -0.
            torques = -0.00001
            dof_vel = -0.
            dof_acc = -2.5e-7
            base_height = -0.
            feet_air_time = 1.0
            collision = -1.
            feet_stumble = -0.0
            action_rate = -0.01
            stand_still = -0.

        only_positive_rewards = True  # if true, negative total rewards are clipped at zero (avoids early termination problems)
        tracking_sigma = 0.25  # tracking reward = exp(-error^2 / sigma)
        soft_dof_pos_limit = 1.  # percentage of urdf limits, values above this limit are penalized
        soft_dof_vel_limit = 1.
        soft_torque_limit = 1.
        base_height_target = 1.
        max_contact_force = 100.  # forces above this value are penalized

    class normalization:
        class obs_scales:
            lin_vel = 2.0
            ang_vel = 0.25
            dof_pos = 1.0
            dof_vel = 0.05
            height_measurements = 5.0

        clip_observations = 100.
        clip_actions = 100.

    class noise:
        add_noise = True
        noise_level = 1.0  # scales other values

        class noise_scales:
            dof_pos = 0.01
            dof_vel = 1.5
            lin_vel = 0.1
            ang_vel = 0.2
            gravity = 0.05
            height_measurements = 0.1

    # viewer camera:
    class viewer:
        ref_env = 0
        pos = [10, 0, 6]  # [m]
        lookat = [11., 5, 3.]  # [m]

    class sim:
        dt = 0.005
        substeps = 1
        gravity = [0., 0., -9.81]  # [m/s^2]
        up_axis = 1  # 0 is y, 1 is z

        class physx:
            num_threads = 10
            solver_type = 1  # 0: pgs, 1: tgs
            num_position_iterations = 4
            num_velocity_iterations = 0
            contact_offset = 0.01  # [m]
            rest_offset = 0.0  # [m]
            bounce_threshold_velocity = 0.5  # [m/s]
            max_depenetration_velocity = 1.0
            max_gpu_contact_pairs = 2**23  # 2**24 -> needed for 8000 envs and more
            default_buffer_size_multiplier = 5
            contact_collection = 2  # 0: never, 1: last sub-step, 2: all sub-steps (default=2)


class LeggedRobotCfgPPO(BaseConfig):
    seed = 1
    runner_class_name = 'OnPolicyRunner'

    class policy:
        init_noise_std = 1.0
        actor_hidden_dims = [512, 256, 128]
        critic_hidden_dims = [512, 256, 128]
        activation = 'elu'  # can be elu, relu, selu, crelu, lrelu, tanh, sigmoid
        # only for 'ActorCriticRecurrent':
        # rnn_type = 'lstm'
        # rnn_hidden_size = 512
        # rnn_num_layers = 1

    class algorithm:
        # training params
        value_loss_coef = 1.0
        use_clipped_value_loss = True
        clip_param = 0.2
        entropy_coef = 0.01
        num_learning_epochs = 5
        num_mini_batches = 4  # mini batch size = num_envs * nsteps / nminibatches
        learning_rate = 1.e-3  # 5.e-4
        schedule = 'adaptive'  # can be adaptive or fixed
        gamma = 0.99
        lam = 0.95
        desired_kl = 0.01
        max_grad_norm = 1.

    class runner:
        policy_class_name = 'ActorCritic'
        algorithm_class_name = 'PPO'
        num_steps_per_env = 24  # per iteration
        max_iterations = 1500  # number of policy updates

        # logging
        save_interval = 50  # check for potential saves every this many iterations
        experiment_name = 'test'
        run_name = ''
        # load and resume
        resume = False
        load_run = -1  # -1 = last run
        checkpoint = -1  # -1 = last saved model
        resume_path = None  # updated from load_run and chkpt
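`_parse_cfg` in the robot class consumes these nested config classes through `class_to_dict` (imported from `legged_gym.utils.helpers`). A minimal sketch of how such a helper can flatten a config class into a dict; the helper's body here is an assumption for illustration, not the repository's exact implementation:

```python
class scales:  # stand-in for LeggedRobotCfg.rewards.scales above
    tracking_lin_vel = 1.0
    tracking_ang_vel = 0.5
    lin_vel_z = -2.0

def class_to_dict(obj) -> dict:
    # walk public attributes, recursing into nested classes (illustrative sketch)
    out = {}
    for key in dir(obj):
        if key.startswith("_"):
            continue
        val = getattr(obj, key)
        out[key] = class_to_dict(val) if isinstance(val, type) else val
    return out

print(class_to_dict(scales))
```

The resulting dict is what the reward machinery iterates over when multiplying each `_reward_*` term by its scale.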
223
legged_gym/envs/base/legged_robot_config_fh.py
Normal file
@@ -0,0 +1,223 @@
from .base_config import BaseConfig


class LeggedRobotCfg_FH(BaseConfig):
    class env:
        num_envs = 4096
        num_observations = 235
        num_privileged_obs = None  # if not None, a privileged_obs_buf is returned by step() (critic obs for asymmetric training); otherwise None is returned
        num_actions = 12
        env_spacing = 3.  # not used with heightfields/trimeshes
        send_timeouts = True  # send time-out information to the algorithm
        episode_length_s = 20  # episode length in seconds

    class terrain:
        mesh_type = 'trimesh'  # none, plane, heightfield or trimesh
        horizontal_scale = 0.1  # [m]
        vertical_scale = 0.005  # [m]
        border_size = 25  # [m]
        curriculum = True
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # rough terrain only:
        measure_heights = True
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # 1m x 1.6m rectangle (without center line)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # dict of arguments for the selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)
        # terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
        terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
        # trimesh only:
        slope_treshold = 0.75  # slopes above this threshold will be corrected to vertical surfaces

    class commands:
        curriculum = False
        max_curriculum = 1.
        num_commands = 4  # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10.  # time before commands are changed [s]
        heading_command = True  # if true: compute ang vel command from heading error

        class ranges:
            lin_vel_x = [-1.0, 1.0]  # min max [m/s]
            lin_vel_y = [-1.0, 1.0]  # min max [m/s]
            ang_vel_yaw = [-1, 1]  # min max [rad/s]
            heading = [-3.14, 3.14]

    class init_state:
        pos = [0.0, 0.0, 1.]  # x, y, z [m]
        rot = [0.0, 0.0, 0.0, 1.0]  # x, y, z, w [quat]
        lin_vel = [0.0, 0.0, 0.0]  # x, y, z [m/s]
        ang_vel = [0.0, 0.0, 0.0]  # x, y, z [rad/s]
        default_joint_angles = {  # target angles when action = 0.0
            "joint_a": 0.,
            "joint_b": 0.}

    class control:
        control_type = 'P'  # P: position, V: velocity, T: torques
        # PD drive parameters:
        stiffness = {'joint_a': 10.0, 'joint_b': 15.}  # [N*m/rad]
        damping = {'joint_a': 1.0, 'joint_b': 1.5}  # [N*m*s/rad]
        # action scale: target angle = actionScale * action + defaultAngle
        action_scale = 0.5
        # decimation: number of control action updates @ sim DT per policy DT
        decimation = 4
        wheel_speed = 1
        arm_pos = 0

    class asset:
        file = ""
        name = "legged_robot"  # actor name
        arm_name = ""
        foot_name = "None"  # name of the feet bodies, used to index body state and contact force tensors
        penalize_contacts_on = []
        terminate_after_contacts_on = []
        disable_gravity = False
        collapse_fixed_joints = True  # merge bodies connected by fixed joints. Specific fixed joints can be kept by adding " <... dont_collapse="true">
        fix_base_link = False  # fix the base of the robot
        default_dof_drive_mode = 3  # see GymDofDriveModeFlags (0 is none, 1 is pos tgt, 2 is vel tgt, 3 is effort)
        self_collisions = 0  # 1 to disable, 0 to enable... bitwise filter
        replace_cylinder_with_capsule = True  # replace collision cylinders with capsules, leads to faster/more stable simulation
        flip_visual_attachments = False  # some .obj meshes must be flipped from y-up to z-up

        density = 0.001
        angular_damping = 0.
        linear_damping = 0.
        max_angular_velocity = 1000.
        max_linear_velocity = 1000.
        armature = 0.
        thickness = 0.01

    class domain_rand:
        randomize_friction = True
        friction_range = [0.5, 1.25]
        randomize_base_mass = False
        added_mass_range = [-1., 1.]
        push_robots = True
        push_interval_s = 15
        max_push_vel_xy = 1.

    class rewards:
        class scales:
            termination = -0.0
            tracking_lin_vel = 1.0
            tracking_ang_vel = 0.5
            lin_vel_z = -2.0
            ang_vel_xy = -0.05
            orientation = -0.
            torques = -0.00001
            dof_vel = -0.
            dof_acc = -2.5e-7
            base_height = -0.
            feet_air_time = 1.0
            collision = -1.
            feet_stumble = -0.0
            action_rate = -0.01
            stand_still = -0.

        only_positive_rewards = True  # if true, negative total rewards are clipped at zero (avoids early termination problems)
        tracking_sigma = 0.25  # tracking reward = exp(-error^2 / sigma)
        soft_dof_pos_limit = 1.  # percentage of urdf limits, values above this limit are penalized
        soft_dof_vel_limit = 1.
        soft_torque_limit = 1.
        base_height_target = 1.
        max_contact_force = 100.  # forces above this value are penalized

    class normalization:
        class obs_scales:
            lin_vel = 2.0
            ang_vel = 0.25
            dof_pos = 1.0
            dof_vel = 0.05
            height_measurements = 5.0

        clip_observations = 100.
        clip_actions = 100.

    class noise:
        add_noise = True
        noise_level = 1.0  # scales other values

        class noise_scales:
            dof_pos = 0.01
            dof_vel = 1.5
            lin_vel = 0.1
            ang_vel = 0.2
            gravity = 0.05
            height_measurements = 0.1

    # viewer camera:
    class viewer:
        ref_env = 0
        pos = [10, 0, 6]  # [m]
        lookat = [11., 5, 3.]  # [m]

    class sim:
        dt = 0.005
        substeps = 1
        gravity = [0., 0., -9.81]  # [m/s^2]
        up_axis = 1  # 0 is y, 1 is z

        class physx:
            num_threads = 10
            solver_type = 1  # 0: pgs, 1: tgs
            num_position_iterations = 4
            num_velocity_iterations = 0
            contact_offset = 0.01  # [m]
            rest_offset = 0.0  # [m]
            bounce_threshold_velocity = 0.5  # [m/s]
            max_depenetration_velocity = 1.0
            max_gpu_contact_pairs = 2**23  # 2**24 -> needed for 8000 envs and more
            default_buffer_size_multiplier = 5
            contact_collection = 2  # 0: never, 1: last sub-step, 2: all sub-steps (default=2)


class LeggedRobotCfgPPO_FH(BaseConfig):
    seed = 1
    runner_class_name = 'OnPolicyRunner'

    class policy:
        init_noise_std = 1.0
        actor_hidden_dims = [512, 256, 128]
        critic_hidden_dims = [512, 256, 128]
        activation = 'elu'  # can be elu, relu, selu, crelu, lrelu, tanh, sigmoid
        # only for 'ActorCriticRecurrent':
        # rnn_type = 'lstm'
        # rnn_hidden_size = 512
        # rnn_num_layers = 1

    class algorithm:
        # training params
        value_loss_coef = 1.0
        use_clipped_value_loss = True
        clip_param = 0.2
        entropy_coef = 0.01
        num_learning_epochs = 5
        num_mini_batches = 4  # mini batch size = num_envs * nsteps / nminibatches
        learning_rate = 1.e-3  # 5.e-4
        schedule = 'adaptive'  # can be adaptive or fixed
        gamma = 0.99
        lam = 0.95
        desired_kl = 0.01
        max_grad_norm = 1.

    class runner:
        policy_class_name = 'ActorCritic'
        algorithm_class_name = 'PPO'
        num_steps_per_env = 24  # per iteration
        max_iterations = 1500  # number of policy updates

        # logging
        save_interval = 50  # check for potential saves every this many iterations
        experiment_name = 'test'
        run_name = ''
        # load and resume
        resume = False
        load_run = -1  # -1 = last run
        checkpoint = -1  # -1 = last saved model
        resume_path = None  # updated from load_run and chkpt
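With `control_type = 'P'`, the `stiffness`, `damping`, and `action_scale` entries above feed a PD law of the form documented in the control class (target angle = action_scale * action + default angle). A hedged NumPy sketch of that law, with the function name and scalar-gain arguments chosen here for illustration (the simulator version works on per-joint gain tensors):

```python
import numpy as np

def pd_torques(actions, dof_pos, dof_vel, default_dof_pos,
               p_gain=10.0, d_gain=1.0, action_scale=0.5):
    # position target from the policy action, then PD torque toward it
    targets = action_scale * actions + default_dof_pos
    return p_gain * (targets - dof_pos) - d_gain * dof_vel

# zero action at the default pose with zero velocity -> zero torque
print(pd_torques(np.zeros(2), np.zeros(2), np.zeros(2), np.zeros(2)))  # [0. 0.]
```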
937
legged_gym/envs/base/legged_robot_fh.py
Normal file
@@ -0,0 +1,937 @@
|
||||
from legged_gym import LEGGED_GYM_ROOT_DIR, envs
|
||||
import time
|
||||
from warnings import WarningMessage
|
||||
import numpy as np
|
||||
import os
|
||||
|
||||
from isaacgym.torch_utils import *
|
||||
from isaacgym import gymtorch, gymapi, gymutil
|
||||
|
||||
import torch
|
||||
from torch import Tensor
|
||||
from typing import Tuple, Dict
|
||||
|
||||
from legged_gym import LEGGED_GYM_ROOT_DIR
|
||||
from legged_gym.envs.base.base_task import BaseTask
|
||||
from legged_gym.utils.terrain import Terrain
|
||||
from legged_gym.utils.math import quat_apply_yaw, wrap_to_pi, torch_rand_sqrt_float
|
||||
from legged_gym.utils.isaacgym_utils import get_euler_xyz as get_euler_xyz_in_tensor
|
||||
from legged_gym.utils.helpers import class_to_dict
|
||||
from .legged_robot_config_fh import LeggedRobotCfg_FH
|
||||
|
||||
class LeggedRobot_FH(BaseTask):
|
||||
def __init__(self, cfg: LeggedRobotCfg_FH, sim_params, physics_engine, sim_device, headless):
|
||||
""" Parses the provided config file,
|
||||
calls create_sim() (which creates, simulation, terrain and environments),
|
||||
initilizes pytorch buffers used during training
|
||||
|
||||
Args:
|
||||
cfg (Dict): Environment config file
|
||||
sim_params (gymapi.SimParams): simulation parameters
|
||||
physics_engine (gymapi.SimType): gymapi.SIM_PHYSX (must be PhysX)
|
||||
device_type (string): 'cuda' or 'cpu'
|
||||
device_id (int): 0, 1, ...
|
||||
headless (bool): Run without rendering if True
|
||||
"""
|
||||
self.cfg = cfg
|
||||
self.sim_params = sim_params
|
||||
self.height_samples = None
|
||||
self.debug_viz = False
|
||||
self.init_done = False
|
||||
self._parse_cfg(self.cfg)
|
||||
super().__init__(self.cfg, sim_params, physics_engine, sim_device, headless)
|
||||
|
||||
if not self.headless:
|
||||
self.set_camera(self.cfg.viewer.pos, self.cfg.viewer.lookat)
|
||||
self._init_buffers()
|
||||
self._prepare_reward_function()
|
||||
self.init_done = True
|
||||
|
    def step(self, actions):
        """ Apply actions, simulate, call self.post_physics_step()

        Args:
            actions (torch.Tensor): Tensor of shape (num_envs, num_actions_per_env)
        """
        clip_actions = self.cfg.normalization.clip_actions
        self.actions = torch.clip(actions, -clip_actions, clip_actions).to(self.device)
        # step physics and render each frame
        self.render()
        for _ in range(self.cfg.control.decimation):
            self.torques = self._compute_torques(self.actions).view(self.torques.shape)
            self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(self.torques))
            self.gym.simulate(self.sim)
            if self.device == 'cpu':
                self.gym.fetch_results(self.sim, True)
            self.gym.refresh_dof_state_tensor(self.sim)
        self.post_physics_step()

        # return clipped obs, clipped states (None), rewards, dones and infos
        clip_obs = self.cfg.normalization.clip_observations
        self.obs_buf = torch.clip(self.obs_buf, -clip_obs, clip_obs)
        if self.privileged_obs_buf is not None:
            self.privileged_obs_buf = torch.clip(self.privileged_obs_buf, -clip_obs, clip_obs)
        return self.obs_buf, self.privileged_obs_buf, self.rew_buf, self.reset_buf, self.extras
    def post_physics_step(self):
        """ Check terminations, compute observations and rewards
            calls self._post_physics_step_callback() for common computations
            calls self._draw_debug_vis() if needed
        """
        self.gym.refresh_actor_root_state_tensor(self.sim)
        self.gym.refresh_net_contact_force_tensor(self.sim)

        self.episode_length_buf += 1
        self.common_step_counter += 1

        # prepare quantities
        self.base_quat[:] = self.root_states[:, 3:7]
        self.base_lin_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
        self.base_ang_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
        self.projected_gravity[:] = quat_rotate_inverse(self.base_quat, self.gravity_vec)

        self._post_physics_step_callback()

        # compute observations, rewards, resets, ...
        self.check_termination()
        self.compute_reward()
        env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
        self.reset_idx(env_ids)
        self.compute_observations()  # in some cases a simulation step might be required to refresh some obs (for example body positions)

        self.last_actions[:] = self.actions[:]
        self.last_dof_vel[:] = self.dof_vel[:]
        self.last_root_vel[:] = self.root_states[:, 7:13]

        if self.viewer and self.enable_viewer_sync and self.debug_viz:
            self._draw_debug_vis()
    def check_termination(self):
        """ Check if environments need to be reset
        """
        self.reset_buf = torch.any(torch.norm(self.contact_forces[:, self.termination_contact_indices, :], dim=-1) > 1.,
                                   dim=1)
        self.time_out_buf = self.episode_length_buf > self.max_episode_length  # no terminal reward for time-outs
        self.reset_buf |= self.time_out_buf
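The contact-based termination test can be exercised in isolation. A minimal sketch with made-up shapes (4 envs, 13 bodies); the monitored body indices are illustrative, not this robot's:

```python
import torch

# An env terminates when any monitored body feels a contact force with norm > 1 N.
contact_forces = torch.zeros(4, 13, 3)      # (num_envs, num_bodies, xyz)
termination_indices = torch.tensor([0, 4])  # hypothetical: base + one hip
contact_forces[2, 0] = torch.tensor([0., 0., 50.])  # env 2 slams its base down

# norm over xyz, then "any monitored body over the threshold" per env
reset_buf = torch.any(
    torch.norm(contact_forces[:, termination_indices, :], dim=-1) > 1., dim=1)
print(reset_buf.tolist())  # [False, False, True, False]
```

Taking the norm over the last axis first means a large force in any direction triggers the reset, not only vertical impacts.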
    def reset_idx(self, env_ids):
        """ Reset some environments.
            Calls self._reset_dofs(env_ids), self._reset_root_states(env_ids), and self._resample_commands(env_ids)
            [Optional] calls self._update_terrain_curriculum(env_ids) and self.update_command_curriculum(env_ids)
            Logs episode info
            Resets some buffers

        Args:
            env_ids (list[int]): List of environment ids which must be reset
        """
        if len(env_ids) == 0:
            return
        # update curriculum
        if self.cfg.terrain.curriculum:
            self._update_terrain_curriculum(env_ids)
        # avoid updating command curriculum at each step since the maximum command is common to all envs
        if self.cfg.commands.curriculum and (self.common_step_counter % self.max_episode_length == 0):
            self.update_command_curriculum(env_ids)

        # reset robot states
        self._reset_dofs(env_ids)
        self._reset_root_states(env_ids)

        self._resample_commands(env_ids)

        # reset buffers
        self.last_actions[env_ids] = 0.
        self.last_dof_vel[env_ids] = 0.
        self.feet_air_time[env_ids] = 0.
        self.episode_length_buf[env_ids] = 0
        self.reset_buf[env_ids] = 1
        # fill extras
        self.extras["episode"] = {}
        for key in self.episode_sums.keys():
            self.extras["episode"]['rew_' + key] = torch.mean(
                self.episode_sums[key][env_ids]) / self.max_episode_length_s
            self.episode_sums[key][env_ids] = 0.
        # log additional curriculum info
        if self.cfg.terrain.curriculum:
            self.extras["episode"]["terrain_level"] = torch.mean(self.terrain_levels.float())
        if self.cfg.commands.curriculum:
            self.extras["episode"]["max_command_x"] = self.command_ranges["lin_vel_x"][1]
        # send timeout info to the algorithm
        if self.cfg.env.send_timeouts:
            self.extras["time_outs"] = self.time_out_buf
    def compute_reward(self):
        """ Compute rewards
            Calls each reward function which had a non-zero scale (processed in self._prepare_reward_function())
            adds each term to the episode sums and to the total reward
        """
        self.rew_buf[:] = 0.
        for i in range(len(self.reward_functions)):
            name = self.reward_names[i]
            rew = self.reward_functions[i]() * self.reward_scales[name]
            self.rew_buf += rew
            self.episode_sums[name] += rew
        if self.cfg.rewards.only_positive_rewards:
            self.rew_buf[:] = torch.clip(self.rew_buf[:], min=0.)
        # add termination reward after clipping
        if "termination" in self.reward_scales:
            rew = self._reward_termination() * self.reward_scales["termination"]
            self.rew_buf += rew
            self.episode_sums["termination"] += rew
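The accumulation pattern (scale each term, add it to the step reward and to the per-term episode sums) can be sketched standalone; the term names and values below are made up:

```python
import torch

num_envs = 3
reward_scales = {"tracking": 1.0, "energy": -0.1}          # hypothetical scales
reward_terms = {                                           # hypothetical raw terms
    "tracking": torch.tensor([0.8, 0.5, 0.9]),
    "energy": torch.tensor([2.0, 1.0, 3.0]),
}
rew_buf = torch.zeros(num_envs)
episode_sums = {name: torch.zeros(num_envs) for name in reward_scales}
for name, scale in reward_scales.items():
    rew = reward_terms[name] * scale
    rew_buf += rew               # total reward for this step
    episode_sums[name] += rew    # per-term logging, reported at reset
# with only_positive_rewards the buffer would be clipped here: torch.clip(rew_buf, min=0.)
print(rew_buf)  # tensor([0.6000, 0.4000, 0.6000])
```

Keeping per-term sums alongside the total is what makes the `rew_<name>` entries in `extras["episode"]` cheap to log.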
    def compute_observations(self):
        """ Computes observations
        """
        self.obs_buf = torch.cat((self.base_lin_vel * self.obs_scales.lin_vel,
                                  self.base_ang_vel * self.obs_scales.ang_vel,
                                  self.projected_gravity,
                                  self.commands[:, :3] * self.commands_scale,
                                  (self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos,
                                  self.dof_vel * self.obs_scales.dof_vel,
                                  self.actions
                                  ), dim=-1)
        # add perceptive inputs if not blind
        if self.cfg.terrain.measure_heights:
            heights = torch.clip(self.root_states[:, 2].unsqueeze(1) - 0.5 - self.measured_heights, -1,
                                 1.) * self.obs_scales.height_measurements
            self.obs_buf = torch.cat((self.obs_buf, heights), dim=-1)
        # add noise if needed
        if self.add_noise:
            self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec
    def create_sim(self):
        """ Creates simulation, terrain and environments
        """
        self.up_axis_idx = 2  # 2 for z, 1 for y -> adapt gravity accordingly
        self.sim = self.gym.create_sim(self.sim_device_id, self.graphics_device_id, self.physics_engine,
                                       self.sim_params)
        mesh_type = self.cfg.terrain.mesh_type
        if mesh_type in ['heightfield', 'trimesh']:
            self.terrain = Terrain(self.cfg.terrain, self.num_envs)
        if mesh_type == 'plane':
            self._create_ground_plane()
        elif mesh_type == 'heightfield':
            self._create_heightfield()
        elif mesh_type == 'trimesh':
            self._create_trimesh()
        elif mesh_type is not None:
            raise ValueError("Terrain mesh type not recognised. Allowed types are [None, plane, heightfield, trimesh]")
        self._create_envs()
    def set_camera(self, position, lookat):
        """ Set camera position and direction
        """
        cam_pos = gymapi.Vec3(position[0], position[1], position[2])
        cam_target = gymapi.Vec3(lookat[0], lookat[1], lookat[2])
        self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
    # ------------- Callbacks --------------
    def _process_rigid_shape_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the rigid shape properties of each environment.
            Called during environment creation.
            Base behavior: randomizes the friction of each environment

        Args:
            props (List[gymapi.RigidShapeProperties]): Properties of each shape of the asset
            env_id (int): Environment id

        Returns:
            [List[gymapi.RigidShapeProperties]]: Modified rigid shape properties
        """
        if self.cfg.domain_rand.randomize_friction:
            if env_id == 0:
                # prepare friction randomization
                friction_range = self.cfg.domain_rand.friction_range
                num_buckets = 64
                bucket_ids = torch.randint(0, num_buckets, (self.num_envs, 1))
                friction_buckets = torch_rand_float(friction_range[0], friction_range[1], (num_buckets, 1),
                                                    device='cpu')
                self.friction_coeffs = friction_buckets[bucket_ids]

            for s in range(len(props)):
                props[s].friction = self.friction_coeffs[env_id]
        return props
    def _process_dof_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the DOF properties of each environment.
            Called during environment creation.
            Base behavior: stores position, velocity and torque limits defined in the URDF

        Args:
            props (numpy.array): Properties of each DOF of the asset
            env_id (int): Environment id

        Returns:
            [numpy.array]: Modified DOF properties
        """
        if env_id == 0:
            self.dof_pos_limits = torch.zeros(self.num_dof, 2, dtype=torch.float, device=self.device,
                                              requires_grad=False)
            self.dof_vel_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            self.torque_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            for i in range(len(props)):
                self.dof_pos_limits[i, 0] = props["lower"][i].item()
                self.dof_pos_limits[i, 1] = props["upper"][i].item()
                self.dof_vel_limits[i] = props["velocity"][i].item()
                self.torque_limits[i] = props["effort"][i].item()
                # soft limits
                m = (self.dof_pos_limits[i, 0] + self.dof_pos_limits[i, 1]) / 2
                r = self.dof_pos_limits[i, 1] - self.dof_pos_limits[i, 0]
                self.dof_pos_limits[i, 0] = m - 0.5 * r * self.cfg.rewards.soft_dof_pos_limit
                self.dof_pos_limits[i, 1] = m + 0.5 * r * self.cfg.rewards.soft_dof_pos_limit
        return props
    def _process_rigid_body_props(self, props, env_id):
        # if env_id == 0:
        #     sum = 0
        #     for i, p in enumerate(props):
        #         sum += p.mass
        #         print(f"Mass of body {i}: {p.mass} (before randomization)")
        #     print(f"Total mass {sum} (before randomization)")
        # randomize base mass
        if self.cfg.domain_rand.randomize_base_mass:
            rng = self.cfg.domain_rand.added_mass_range
            props[0].mass += np.random.uniform(rng[0], rng[1])
        return props
    def _post_physics_step_callback(self):
        """ Callback called before computing terminations, rewards, and observations
            Default behaviour: Compute ang vel command based on target and heading, compute measured terrain heights and randomly push robots
        """
        env_ids = (self.episode_length_buf % int(self.cfg.commands.resampling_time / self.dt) == 0).nonzero(
            as_tuple=False).flatten()
        self._resample_commands(env_ids)
        if self.cfg.commands.heading_command:
            forward = quat_apply(self.base_quat, self.forward_vec)
            heading = torch.atan2(forward[:, 1], forward[:, 0])
            self.commands[:, 2] = torch.clip(0.5 * wrap_to_pi(self.commands[:, 3] - heading), -1., 1.)

        if self.cfg.terrain.measure_heights:
            self.measured_heights = self._get_heights()
        if self.cfg.domain_rand.push_robots and (self.common_step_counter % self.cfg.domain_rand.push_interval == 0):
            self._push_robots()
    def _resample_commands(self, env_ids):
        """ Randomly select commands of some environments

        Args:
            env_ids (List[int]): Environments ids for which new commands are needed
        """
        self.commands[env_ids, 0] = torch_rand_float(self.command_ranges["lin_vel_x"][0],
                                                     self.command_ranges["lin_vel_x"][1], (len(env_ids), 1),
                                                     device=self.device).squeeze(1)
        self.commands[env_ids, 1] = torch_rand_float(self.command_ranges["lin_vel_y"][0],
                                                     self.command_ranges["lin_vel_y"][1], (len(env_ids), 1),
                                                     device=self.device).squeeze(1)
        if self.cfg.commands.heading_command:
            self.commands[env_ids, 3] = torch_rand_float(self.command_ranges["heading"][0],
                                                         self.command_ranges["heading"][1], (len(env_ids), 1),
                                                         device=self.device).squeeze(1)
        else:
            self.commands[env_ids, 2] = torch_rand_float(self.command_ranges["ang_vel_yaw"][0],
                                                         self.command_ranges["ang_vel_yaw"][1], (len(env_ids), 1),
                                                         device=self.device).squeeze(1)

        # set small commands to zero
        self.commands[env_ids, :2] *= (torch.norm(self.commands[env_ids, :2], dim=1) > 0.2).unsqueeze(1)
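The final line is a deadband: commanded xy velocities with norm at or below 0.2 m/s are zeroed, so standing still is part of the command distribution. A standalone check with made-up commands:

```python
import torch

commands = torch.tensor([[0.05, 0.05],   # norm ~0.07 m/s -> zeroed
                         [0.50, 0.00]])  # norm 0.5 m/s   -> kept
# boolean mask per env, broadcast over the two command components
commands *= (torch.norm(commands, dim=1) > 0.2).unsqueeze(1)
print(commands.tolist())  # [[0.0, 0.0], [0.5, 0.0]]
```

The `unsqueeze(1)` turns the per-env mask of shape `(N,)` into `(N, 1)` so it broadcasts across both velocity components.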
    def _compute_torques(self, actions):
        """ Compute torques from actions.
            Actions can be interpreted as position or velocity targets given to a PD controller, or directly as scaled torques.
            [NOTE]: torques must have the same dimension as the number of DOFs, even if some DOFs are not actuated.

        Args:
            actions (torch.Tensor): Actions

        Returns:
            [torch.Tensor]: Torques sent to the simulation
        """
        # PD controller
        actions_scaled = actions * self.cfg.control.action_scale
        control_type = self.cfg.control.control_type
        if control_type == "P":
            torques = self.p_gains * (
                    actions_scaled + self.default_dof_pos - self.dof_pos) - self.d_gains * self.dof_vel
        elif control_type == "V":
            torques = self.p_gains * (actions_scaled - self.dof_vel) - self.d_gains * (
                    self.dof_vel - self.last_dof_vel) / self.sim_params.dt
        elif control_type == "T":
            torques = actions_scaled
        else:
            raise NameError(f"Unknown controller type: {control_type}")
        return torch.clip(torques, -self.torque_limits, self.torque_limits)
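The `"P"` branch is a joint-space PD law, `tau = Kp * (q_target - q) - Kd * qdot` with `q_target = action * action_scale + default_dof_pos`. A standalone sketch for two joints; the gains, limits and states below are hypothetical, not this robot's config:

```python
import torch

p_gains = torch.tensor([20.0, 20.0])         # Kp per joint (made up)
d_gains = torch.tensor([0.5, 0.5])           # Kd per joint (made up)
default_dof_pos = torch.tensor([0.0, 0.8])   # standing pose offsets
dof_pos = torch.tensor([0.1, 0.7])           # current joint positions
dof_vel = torch.tensor([1.0, -1.0])          # current joint velocities
actions_scaled = torch.tensor([0.2, 0.0]) * 0.25  # action_scale = 0.25

torques = p_gains * (actions_scaled + default_dof_pos - dof_pos) - d_gains * dof_vel
torques = torch.clip(torques, -33.5, 33.5)   # effort limits from the URDF (made up)
print(torques)  # tensor([-1.5000,  2.5000])
```

Because the policy outputs offsets around `default_dof_pos`, a zero action commands the nominal standing pose rather than zero joint angles.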
    def _reset_dofs(self, env_ids):
        """ Resets DOF positions and velocities of selected environments
            Positions are randomly selected within 0.5:1.5 x default positions.
            Velocities are set to zero.

        Args:
            env_ids (List[int]): Environment ids
        """
        self.dof_pos[env_ids] = self.default_dof_pos * torch_rand_float(0.5, 1.5, (len(env_ids), self.num_dof),
                                                                        device=self.device)
        self.dof_vel[env_ids] = 0.

        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_dof_state_tensor_indexed(self.sim,
                                              gymtorch.unwrap_tensor(self.dof_state),
                                              gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
    def _reset_root_states(self, env_ids):
        """ Resets ROOT states position and velocities of selected environments
            Sets base position based on the curriculum
            Selects randomized base velocities within -0.5:0.5 [m/s, rad/s]

        Args:
            env_ids (List[int]): Environment ids
        """
        # base position
        if self.custom_origins:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
            self.root_states[env_ids, :2] += torch_rand_float(-1., 1., (len(env_ids), 2),
                                                              device=self.device)  # xy position within 1m of the center
        else:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
        # base velocities
        self.root_states[env_ids, 7:13] = torch_rand_float(-0.5, 0.5, (len(env_ids), 6),
                                                           device=self.device)  # [7:10]: lin vel, [10:13]: ang vel
        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_actor_root_state_tensor_indexed(self.sim,
                                                     gymtorch.unwrap_tensor(self.root_states),
                                                     gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
    def _push_robots(self):
        """ Randomly pushes the robots. Emulates an impulse by setting a randomized base velocity.
        """
        max_vel = self.cfg.domain_rand.max_push_vel_xy
        self.root_states[:, 7:9] = torch_rand_float(-max_vel, max_vel, (self.num_envs, 2),
                                                    device=self.device)  # lin vel x/y
        self.gym.set_actor_root_state_tensor(self.sim, gymtorch.unwrap_tensor(self.root_states))
    def _update_terrain_curriculum(self, env_ids):
        """ Implements the game-inspired curriculum.

        Args:
            env_ids (List[int]): ids of environments being reset
        """
        # Implement terrain curriculum
        if not self.init_done:
            # don't change on initial reset
            return
        distance = torch.norm(self.root_states[env_ids, :2] - self.env_origins[env_ids, :2], dim=1)
        # robots that walked far enough progress to harder terrains
        move_up = distance > self.terrain.env_length / 2
        # robots that walked less than half of their required distance go to simpler terrains
        move_down = (distance < torch.norm(self.commands[env_ids, :2],
                                           dim=1) * self.max_episode_length_s * 0.5) * ~move_up
        self.terrain_levels[env_ids] += 1 * move_up - 1 * move_down
        # robots that solve the last level are sent to a random one
        self.terrain_levels[env_ids] = torch.where(self.terrain_levels[env_ids] >= self.max_terrain_level,
                                                   torch.randint_like(self.terrain_levels[env_ids],
                                                                      self.max_terrain_level),
                                                   torch.clip(self.terrain_levels[env_ids],
                                                              0))  # (the minimum level is zero)
        self.env_origins[env_ids] = self.terrain_origins[self.terrain_levels[env_ids], self.terrain_types[env_ids]]
    def update_command_curriculum(self, env_ids):
        """ Implements a curriculum of increasing commands

        Args:
            env_ids (List[int]): ids of environments being reset
        """
        # If the tracking reward is above 80% of the maximum, increase the range of commands
        if torch.mean(self.episode_sums["tracking_lin_vel"][env_ids]) / self.max_episode_length > 0.8 * \
                self.reward_scales["tracking_lin_vel"]:
            self.command_ranges["lin_vel_x"][0] = np.clip(self.command_ranges["lin_vel_x"][0] - 0.5,
                                                          -self.cfg.commands.max_curriculum, 0.)
            self.command_ranges["lin_vel_x"][1] = np.clip(self.command_ranges["lin_vel_x"][1] + 0.5, 0.,
                                                          self.cfg.commands.max_curriculum)
    def _get_noise_scale_vec(self, cfg):
        """ Sets a vector used to scale the noise added to the observations.
            [NOTE]: Must be adapted when changing the observations structure

        Args:
            cfg (Dict): Environment config file

        Returns:
            [torch.Tensor]: Vector of scales used to multiply a uniform distribution in [-1, 1]
        """
        noise_vec = torch.zeros_like(self.obs_buf[0])
        self.add_noise = self.cfg.noise.add_noise
        noise_scales = self.cfg.noise.noise_scales
        noise_level = self.cfg.noise.noise_level
        noise_vec[:3] = noise_scales.lin_vel * noise_level * self.obs_scales.lin_vel
        noise_vec[3:6] = noise_scales.ang_vel * noise_level * self.obs_scales.ang_vel
        noise_vec[6:9] = noise_scales.gravity * noise_level
        noise_vec[9:12] = 0.  # commands
        noise_vec[12:24] = noise_scales.dof_pos * noise_level * self.obs_scales.dof_pos
        noise_vec[24:36] = noise_scales.dof_vel * noise_level * self.obs_scales.dof_vel
        noise_vec[36:48] = 0.  # previous actions
        if self.cfg.terrain.measure_heights:
            noise_vec[48:235] = noise_scales.height_measurements * noise_level * self.obs_scales.height_measurements
        return noise_vec

    # ----------------------------------------
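The returned vector is consumed in compute_observations() as `obs += (2 * rand - 1) * noise_vec`, so each entry bounds the noise on one observation dimension. A toy 6-dim sketch with made-up scales, showing the per-dimension bound:

```python
import torch

noise_vec = torch.zeros(6)
noise_vec[:3] = 0.1 * 1.0 * 2.0  # e.g. lin_vel: noise_scale * noise_level * obs_scale
noise_vec[3:6] = 0.05            # e.g. gravity: no observation scaling

obs = torch.zeros(6)
noisy_obs = obs + (2 * torch.rand_like(obs) - 1) * noise_vec
# uniform noise in [-1, 1) scaled per dimension, so each entry is bounded by its scale
assert torch.all(noisy_obs[:3].abs() <= 0.2)
assert torch.all(noisy_obs[3:].abs() <= 0.05)
```

Since observation scaling is applied before the noise, the noise entries must include the same `obs_scales` factors to keep the signal-to-noise ratio consistent.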
    def _init_buffers(self):
        """ Initialize torch tensors which will contain simulation states and processed quantities
        """
        # get gym GPU state tensors
        actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
        dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
        net_contact_forces = self.gym.acquire_net_contact_force_tensor(self.sim)
        self.gym.refresh_dof_state_tensor(self.sim)
        self.gym.refresh_actor_root_state_tensor(self.sim)
        self.gym.refresh_net_contact_force_tensor(self.sim)

        # create some wrapper tensors for different slices
        self.root_states = gymtorch.wrap_tensor(actor_root_state)
        self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
        self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
        self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
        self.base_quat = self.root_states[:, 3:7]

        self.contact_forces = gymtorch.wrap_tensor(net_contact_forces).view(self.num_envs, -1,
                                                                            3)  # shape: num_envs, num_bodies, xyz axis

        # initialize some data used later on
        self.common_step_counter = 0
        self.extras = {}
        self.noise_scale_vec = self._get_noise_scale_vec(self.cfg)
        self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat(
            (self.num_envs, 1))
        self.forward_vec = to_torch([1., 0., 0.], device=self.device).repeat((self.num_envs, 1))
        self.torques = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
                                   requires_grad=False)
        self.p_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.d_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
                                   requires_grad=False)
        self.last_actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device,
                                        requires_grad=False)
        self.last_dof_vel = torch.zeros_like(self.dof_vel)
        self.last_root_vel = torch.zeros_like(self.root_states[:, 7:13])
        self.commands = torch.zeros(self.num_envs, self.cfg.commands.num_commands, dtype=torch.float,
                                    device=self.device, requires_grad=False)  # x vel, y vel, yaw vel, heading
        self.commands_scale = torch.tensor([self.obs_scales.lin_vel, self.obs_scales.lin_vel, self.obs_scales.ang_vel],
                                           device=self.device, requires_grad=False)  # TODO change this
        self.feet_air_time = torch.zeros(self.num_envs, self.feet_indices.shape[0], dtype=torch.float,
                                         device=self.device, requires_grad=False)
        self.last_contacts = torch.zeros(self.num_envs, len(self.feet_indices), dtype=torch.bool, device=self.device,
                                         requires_grad=False)
        self.base_lin_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
        self.base_ang_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
        self.projected_gravity = quat_rotate_inverse(self.base_quat, self.gravity_vec)
        if self.cfg.terrain.measure_heights:
            self.height_points = self._init_height_points()
        self.measured_heights = 0

        # joint position offsets and PD gains
        self.default_dof_pos = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
        for i in range(self.num_dofs):
            name = self.dof_names[i]
            angle = self.cfg.init_state.default_joint_angles[name]
            self.default_dof_pos[i] = angle
            found = False
            for dof_name in self.cfg.control.stiffness.keys():
                if dof_name in name:
                    self.p_gains[i] = self.cfg.control.stiffness[dof_name]
                    self.d_gains[i] = self.cfg.control.damping[dof_name]
                    found = True
            if not found:
                self.p_gains[i] = 0.
                self.d_gains[i] = 0.
                if self.cfg.control.control_type in ["P", "V"]:
                    print(f"PD gains of joint {name} were not defined, setting them to zero")
        self.default_dof_pos = self.default_dof_pos.unsqueeze(0)
    def _prepare_reward_function(self):
        """ Prepares a list of reward functions, which will be called to compute the total reward.
            Looks for self._reward_<REWARD_NAME>, where <REWARD_NAME> are names of all non-zero reward scales in the cfg.
        """
        # remove zero scales + multiply non-zero ones by dt
        for key in list(self.reward_scales.keys()):
            scale = self.reward_scales[key]
            if scale == 0:
                self.reward_scales.pop(key)
            else:
                self.reward_scales[key] *= self.dt
        # prepare list of functions
        self.reward_functions = []
        self.reward_names = []
        for name, scale in self.reward_scales.items():
            if name == "termination":
                continue
            self.reward_names.append(name)
            name = '_reward_' + name
            self.reward_functions.append(getattr(self, name))

        # reward episode sums
        self.episode_sums = {
            name: torch.zeros(self.num_envs, dtype=torch.float, device=self.device, requires_grad=False)
            for name in self.reward_scales.keys()}
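The name-convention lookup (`self._reward_<name>` via `getattr`) and the zero-scale pruning can be sketched with a toy class; the reward names and scales below are illustrative, not this environment's:

```python
class ToyEnv:
    dt = 0.02  # hypothetical policy step

    def __init__(self, reward_scales):
        self.reward_scales = dict(reward_scales)
        # drop zero scales, convert the rest from per-second to per-step
        for key in list(self.reward_scales.keys()):
            if self.reward_scales[key] == 0:
                self.reward_scales.pop(key)
            else:
                self.reward_scales[key] *= self.dt
        # resolve each remaining name to a bound method _reward_<name>
        self.reward_functions = [getattr(self, '_reward_' + name)
                                 for name in self.reward_scales if name != "termination"]

    def _reward_tracking(self):
        return 1.0

    def _reward_energy(self):
        return -2.0


env = ToyEnv({"tracking": 1.0, "energy": 0.0})
print([f.__name__ for f in env.reward_functions])  # ['_reward_tracking']
```

Scaling by `dt` here makes the configured scales independent of the control frequency, since each reward term is summed once per step.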
    def _create_ground_plane(self):
        """ Adds a ground plane to the simulation, sets friction and restitution based on the cfg.
        """
        plane_params = gymapi.PlaneParams()
        plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
        plane_params.static_friction = self.cfg.terrain.static_friction
        plane_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        plane_params.restitution = self.cfg.terrain.restitution
        self.gym.add_ground(self.sim, plane_params)
    def _create_heightfield(self):
        """ Adds a heightfield terrain to the simulation, sets parameters based on the cfg.
        """
        hf_params = gymapi.HeightFieldParams()
        hf_params.column_scale = self.terrain.cfg.horizontal_scale
        hf_params.row_scale = self.terrain.cfg.horizontal_scale
        hf_params.vertical_scale = self.terrain.cfg.vertical_scale
        hf_params.nbRows = self.terrain.tot_cols
        hf_params.nbColumns = self.terrain.tot_rows
        hf_params.transform.p.x = -self.terrain.cfg.border_size
        hf_params.transform.p.y = -self.terrain.cfg.border_size
        hf_params.transform.p.z = 0.0
        hf_params.static_friction = self.cfg.terrain.static_friction
        hf_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        hf_params.restitution = self.cfg.terrain.restitution

        self.gym.add_heightfield(self.sim, self.terrain.heightsamples, hf_params)
        self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows,
                                                                            self.terrain.tot_cols).to(self.device)
    def _create_trimesh(self):
        """ Adds a triangle mesh terrain to the simulation, sets parameters based on the cfg.
        """
        tm_params = gymapi.TriangleMeshParams()
        tm_params.nb_vertices = self.terrain.vertices.shape[0]
        tm_params.nb_triangles = self.terrain.triangles.shape[0]

        tm_params.transform.p.x = -self.terrain.cfg.border_size
        tm_params.transform.p.y = -self.terrain.cfg.border_size
        tm_params.transform.p.z = 0.0
        tm_params.static_friction = self.cfg.terrain.static_friction
        tm_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        tm_params.restitution = self.cfg.terrain.restitution
        self.gym.add_triangle_mesh(self.sim, self.terrain.vertices.flatten(order='C'),
                                   self.terrain.triangles.flatten(order='C'), tm_params)
        self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows,
                                                                            self.terrain.tot_cols).to(self.device)
    def _create_envs(self):
        """ Creates environments:
            1. loads the robot URDF/MJCF asset,
            2. For each environment
               2.1 creates the environment,
               2.2 calls DOF and Rigid shape properties callbacks,
               2.3 create actor with these properties and add them to the env
            3. Store indices of different bodies of the robot
        """
        asset_path = self.cfg.asset.file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
        asset_root = os.path.dirname(asset_path)
        asset_file = os.path.basename(asset_path)

        asset_options = gymapi.AssetOptions()
        asset_options.default_dof_drive_mode = self.cfg.asset.default_dof_drive_mode
        asset_options.collapse_fixed_joints = self.cfg.asset.collapse_fixed_joints
        asset_options.replace_cylinder_with_capsule = self.cfg.asset.replace_cylinder_with_capsule
        asset_options.flip_visual_attachments = self.cfg.asset.flip_visual_attachments
        asset_options.fix_base_link = self.cfg.asset.fix_base_link
        asset_options.density = self.cfg.asset.density
        asset_options.angular_damping = self.cfg.asset.angular_damping
        asset_options.linear_damping = self.cfg.asset.linear_damping
        asset_options.max_angular_velocity = self.cfg.asset.max_angular_velocity
        asset_options.max_linear_velocity = self.cfg.asset.max_linear_velocity
        asset_options.armature = self.cfg.asset.armature
        asset_options.thickness = self.cfg.asset.thickness
        asset_options.disable_gravity = self.cfg.asset.disable_gravity

        robot_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
        self.num_dof = self.gym.get_asset_dof_count(robot_asset)
        self.num_bodies = self.gym.get_asset_rigid_body_count(robot_asset)
        dof_props_asset = self.gym.get_asset_dof_properties(robot_asset)
        rigid_shape_props_asset = self.gym.get_asset_rigid_shape_properties(robot_asset)

        # save body names from the asset
        body_names = self.gym.get_asset_rigid_body_names(robot_asset)
        self.dof_names = self.gym.get_asset_dof_names(robot_asset)
        self.num_bodies = len(body_names)
        self.num_dofs = len(self.dof_names)
        feet_names = [s for s in body_names if self.cfg.asset.foot_name in s]
        penalized_contact_names = []
        for name in self.cfg.asset.penalize_contacts_on:
            penalized_contact_names.extend([s for s in body_names if name in s])
        termination_contact_names = []
        for name in self.cfg.asset.terminate_after_contacts_on:
            termination_contact_names.extend([s for s in body_names if name in s])

        base_init_state_list = self.cfg.init_state.pos + self.cfg.init_state.rot + self.cfg.init_state.lin_vel + self.cfg.init_state.ang_vel
        self.base_init_state = to_torch(base_init_state_list, device=self.device, requires_grad=False)
        start_pose = gymapi.Transform()
        start_pose.p = gymapi.Vec3(*self.base_init_state[:3])

        self._get_env_origins()
        env_lower = gymapi.Vec3(0., 0., 0.)
        env_upper = gymapi.Vec3(0., 0., 0.)
        self.actor_handles = []
        self.envs = []
        for i in range(self.num_envs):
            # create env instance
            env_handle = self.gym.create_env(self.sim, env_lower, env_upper, int(np.sqrt(self.num_envs)))
            pos = self.env_origins[i].clone()
            pos[:2] += torch_rand_float(-1., 1., (2, 1), device=self.device).squeeze(1)
            start_pose.p = gymapi.Vec3(*pos)

            rigid_shape_props = self._process_rigid_shape_props(rigid_shape_props_asset, i)
            self.gym.set_asset_rigid_shape_properties(robot_asset, rigid_shape_props)
            actor_handle = self.gym.create_actor(env_handle, robot_asset, start_pose, self.cfg.asset.name, i,
                                                 self.cfg.asset.self_collisions, 0)
            dof_props = self._process_dof_props(dof_props_asset, i)
            self.gym.set_actor_dof_properties(env_handle, actor_handle, dof_props)
            body_props = self.gym.get_actor_rigid_body_properties(env_handle, actor_handle)
            body_props = self._process_rigid_body_props(body_props, i)
            self.gym.set_actor_rigid_body_properties(env_handle, actor_handle, body_props, recomputeInertia=True)
            self.envs.append(env_handle)
            self.actor_handles.append(actor_handle)

        self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
        for i in range(len(feet_names)):
            self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0],
                                                                         feet_names[i])

        self.penalised_contact_indices = torch.zeros(len(penalized_contact_names), dtype=torch.long, device=self.device,
                                                     requires_grad=False)
        for i in range(len(penalized_contact_names)):
            self.penalised_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0],
                                                                                      self.actor_handles[0],
                                                                                      penalized_contact_names[i])

        self.termination_contact_indices = torch.zeros(len(termination_contact_names), dtype=torch.long,
                                                       device=self.device, requires_grad=False)
        for i in range(len(termination_contact_names)):
            self.termination_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0],
                                                                                        self.actor_handles[0],
                                                                                        termination_contact_names[i])
||||
    def _get_env_origins(self):
        """ Sets environment origins. On rough terrain the origins are defined by the terrain platforms.
            Otherwise create a grid.
        """
        if self.cfg.terrain.mesh_type in ["heightfield", "trimesh"]:
            self.custom_origins = True
            self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
            # put robots at the origins defined by the terrain
            max_init_level = self.cfg.terrain.max_init_terrain_level
            if not self.cfg.terrain.curriculum:
                max_init_level = self.cfg.terrain.num_rows - 1
            self.terrain_levels = torch.randint(0, max_init_level + 1, (self.num_envs,), device=self.device)
            self.terrain_types = torch.div(torch.arange(self.num_envs, device=self.device),
                                           (self.num_envs / self.cfg.terrain.num_cols),
                                           rounding_mode='floor').to(torch.long)
            self.max_terrain_level = self.cfg.terrain.num_rows
            self.terrain_origins = torch.from_numpy(self.terrain.env_origins).to(self.device).to(torch.float)
            self.env_origins[:] = self.terrain_origins[self.terrain_levels, self.terrain_types]
        else:
            self.custom_origins = False
            self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
            # create a grid of robots
            num_cols = np.floor(np.sqrt(self.num_envs))
            num_rows = np.ceil(self.num_envs / num_cols)
            xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols))
            spacing = self.cfg.env.env_spacing
            self.env_origins[:, 0] = spacing * xx.flatten()[:self.num_envs]
            self.env_origins[:, 1] = spacing * yy.flatten()[:self.num_envs]
            self.env_origins[:, 2] = 0.

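The grid branch of `_get_env_origins` can be exercised standalone. A minimal sketch (the `grid_origins` helper name is ours; `indexing="ij"` is written out explicitly where the original relies on the older torch default):

```python
import numpy as np
import torch

def grid_origins(num_envs: int, spacing: float) -> torch.Tensor:
    # Mirrors the else-branch above: lay robots out on a square-ish grid,
    # spacing metres apart, z = 0.
    num_cols = np.floor(np.sqrt(num_envs))
    num_rows = np.ceil(num_envs / num_cols)
    xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols), indexing="ij")
    origins = torch.zeros(num_envs, 3)
    origins[:, 0] = spacing * xx.flatten()[:num_envs]
    origins[:, 1] = spacing * yy.flatten()[:num_envs]
    return origins

# 6 envs with 3 m spacing -> a 3x2 grid; env 3 sits at (3, 3, 0).
origins = grid_origins(6, 3.0)
```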
    def _parse_cfg(self, cfg):
        self.dt = self.cfg.control.decimation * self.sim_params.dt
        self.obs_scales = self.cfg.normalization.obs_scales
        self.reward_scales = class_to_dict(self.cfg.rewards.scales)
        self.command_ranges = class_to_dict(self.cfg.commands.ranges)
        if self.cfg.terrain.mesh_type not in ['heightfield', 'trimesh']:
            self.cfg.terrain.curriculum = False
        self.max_episode_length_s = self.cfg.env.episode_length_s
        self.max_episode_length = np.ceil(self.max_episode_length_s / self.dt)

        self.cfg.domain_rand.push_interval = np.ceil(self.cfg.domain_rand.push_interval_s / self.dt)

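With the config values appearing later in this commit (sim dt 0.005 s, decimation 4, 20 s episodes), the arithmetic in `_parse_cfg` works out as follows; a standalone sketch:

```python
import numpy as np

sim_dt = 0.005        # physics step [s] (cfg.sim.dt in the configs below)
decimation = 4        # physics steps per policy step (cfg.control.decimation)
episode_length_s = 20  # cfg.env.episode_length_s

policy_dt = decimation * sim_dt                           # dt seen by the policy: 0.02 s
max_episode_length = np.ceil(episode_length_s / policy_dt)  # 1000 policy steps per episode
```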
    def _draw_debug_vis(self):
        """ Draws visualizations for debugging (slows down simulation a lot).
            Default behaviour: draws height measurement points
        """
        # draw height lines
        if not self.terrain.cfg.measure_heights:
            return
        self.gym.clear_lines(self.viewer)
        self.gym.refresh_rigid_body_state_tensor(self.sim)
        sphere_geom = gymutil.WireframeSphereGeometry(0.02, 4, 4, None, color=(1, 1, 0))
        for i in range(self.num_envs):
            base_pos = (self.root_states[i, :3]).cpu().numpy()
            heights = self.measured_heights[i].cpu().numpy()
            height_points = quat_apply_yaw(self.base_quat[i].repeat(heights.shape[0]),
                                           self.height_points[i]).cpu().numpy()
            for j in range(heights.shape[0]):
                x = height_points[j, 0] + base_pos[0]
                y = height_points[j, 1] + base_pos[1]
                z = heights[j]
                sphere_pose = gymapi.Transform(gymapi.Vec3(x, y, z), r=None)
                gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], sphere_pose)

    def _init_height_points(self):
        """ Returns points at which the height measurements are sampled (in base frame)

        Returns:
            [torch.Tensor]: Tensor of shape (num_envs, self.num_height_points, 3)
        """
        y = torch.tensor(self.cfg.terrain.measured_points_y, device=self.device, requires_grad=False)
        x = torch.tensor(self.cfg.terrain.measured_points_x, device=self.device, requires_grad=False)
        grid_x, grid_y = torch.meshgrid(x, y)

        self.num_height_points = grid_x.numel()
        points = torch.zeros(self.num_envs, self.num_height_points, 3, device=self.device, requires_grad=False)
        points[:, :, 0] = grid_x.flatten()
        points[:, :, 1] = grid_y.flatten()
        return points

    def _get_heights(self, env_ids=None):
        """ Samples heights of the terrain at required points around each robot.
            The points are offset by the base's position and rotated by the base's yaw

        Args:
            env_ids (List[int], optional): Subset of environments for which to return the heights. Defaults to None.

        Raises:
            NameError: if the terrain mesh type is 'none' (heights cannot be measured)

        Returns:
            [torch.Tensor]: Tensor of shape (num_envs, num_height_points) with the measured heights [m]
        """
        if self.cfg.terrain.mesh_type == 'plane':
            return torch.zeros(self.num_envs, self.num_height_points, device=self.device, requires_grad=False)
        elif self.cfg.terrain.mesh_type == 'none':
            raise NameError("Can't measure height with terrain mesh type 'none'")

        if env_ids:
            points = quat_apply_yaw(self.base_quat[env_ids].repeat(1, self.num_height_points),
                                    self.height_points[env_ids]) + (self.root_states[env_ids, :3]).unsqueeze(1)
        else:
            points = quat_apply_yaw(self.base_quat.repeat(1, self.num_height_points),
                                    self.height_points) + (self.root_states[:, :3]).unsqueeze(1)

        points += self.terrain.cfg.border_size
        points = (points / self.terrain.cfg.horizontal_scale).long()
        px = points[:, :, 0].view(-1)
        py = points[:, :, 1].view(-1)
        px = torch.clip(px, 0, self.height_samples.shape[0] - 2)
        py = torch.clip(py, 0, self.height_samples.shape[1] - 2)

        heights1 = self.height_samples[px, py]
        heights2 = self.height_samples[px + 1, py]
        heights3 = self.height_samples[px, py + 1]
        heights = torch.min(heights1, heights2)
        heights = torch.min(heights, heights3)

        return heights.view(self.num_envs, -1) * self.terrain.cfg.vertical_scale

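The sampling step in `_get_heights` takes the minimum of a cell and its +x / +y neighbours (a conservative estimate rather than an interpolation), then converts from integer heightfield units to metres. A toy trace on a 4x4 heightfield (made-up values):

```python
import torch

# Toy heightfield in integer "vertical_scale" units.
height_samples = torch.tensor([[0, 0, 0, 0],
                               [0, 2, 4, 0],
                               [0, 4, 8, 0],
                               [0, 0, 0, 0]])
vertical_scale = 0.005  # [m] per unit, as in the terrain configs

# Query cell (px, py) = (1, 1): min of the cell and its +x / +y neighbours,
# exactly as in _get_heights above.
px, py = torch.tensor([1]), torch.tensor([1])
h = torch.min(torch.min(height_samples[px, py], height_samples[px + 1, py]),
              height_samples[px, py + 1])
height_m = h * vertical_scale  # min(2, 4, 4) = 2 units -> 0.01 m
```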
    # ------------ reward functions ----------------
    def _reward_lin_vel_z(self):
        # Penalize z axis base linear velocity
        return torch.square(self.base_lin_vel[:, 2])

    def _reward_ang_vel_xy(self):
        # Penalize xy axes base angular velocity
        return torch.sum(torch.square(self.base_ang_vel[:, :2]), dim=1)

    def _reward_orientation(self):
        # Penalize non flat base orientation
        return torch.sum(torch.square(self.projected_gravity[:, :2]), dim=1)

    def _reward_base_height(self):
        # Penalize base height away from target
        base_height = torch.mean(self.root_states[:, 2].unsqueeze(1) - self.measured_heights, dim=1)
        return torch.square(base_height - self.cfg.rewards.base_height_target)

    def _reward_torques(self):
        # Penalize torques
        return torch.sum(torch.square(self.torques), dim=1)

    def _reward_dof_vel(self):
        # Penalize dof velocities
        return torch.sum(torch.square(self.dof_vel), dim=1)

    def _reward_dof_acc(self):
        # Penalize dof accelerations
        return torch.sum(torch.square((self.last_dof_vel - self.dof_vel) / self.dt), dim=1)

    def _reward_action_rate(self):
        # Penalize changes in actions
        return torch.sum(torch.square(self.last_actions - self.actions), dim=1)

    def _reward_collision(self):
        # Penalize collisions on selected bodies
        return torch.sum(1. * (torch.norm(self.contact_forces[:, self.penalised_contact_indices, :], dim=-1) > 0.1),
                         dim=1)

    def _reward_termination(self):
        # Terminal reward / penalty
        return self.reset_buf * ~self.time_out_buf

    def _reward_dof_pos_limits(self):
        # Penalize dof positions too close to the limit
        out_of_limits = -(self.dof_pos - self.dof_pos_limits[:, 0]).clip(max=0.)  # lower limit
        out_of_limits += (self.dof_pos - self.dof_pos_limits[:, 1]).clip(min=0.)  # upper limit
        return torch.sum(out_of_limits, dim=1)

    def _reward_dof_vel_limits(self):
        # Penalize dof velocities too close to the limit
        # clip to max error = 1 rad/s per joint to avoid huge penalties
        return torch.sum(
            (torch.abs(self.dof_vel) - self.dof_vel_limits * self.cfg.rewards.soft_dof_vel_limit).clip(min=0., max=1.),
            dim=1)

    def _reward_torque_limits(self):
        # Penalize torques too close to the limit
        return torch.sum(
            (torch.abs(self.torques) - self.torque_limits * self.cfg.rewards.soft_torque_limit).clip(min=0.), dim=1)

    def _reward_tracking_lin_vel(self):
        # Tracking of linear velocity commands (xy axes)
        lin_vel_error = torch.sum(torch.square(self.commands[:, :2] - self.base_lin_vel[:, :2]), dim=1)
        return torch.exp(-lin_vel_error / self.cfg.rewards.tracking_sigma)

    def _reward_tracking_ang_vel(self):
        # Tracking of angular velocity commands (yaw)
        ang_vel_error = torch.square(self.commands[:, 2] - self.base_ang_vel[:, 2])
        return torch.exp(-ang_vel_error / self.cfg.rewards.tracking_sigma)

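The two tracking rewards use the shaping `exp(-error^2 / sigma)` noted in the configs: the reward is 1 at zero error and decays smoothly toward 0 as the error grows. A standalone trace with made-up velocities:

```python
import torch

tracking_sigma = 0.25  # same role as cfg.rewards.tracking_sigma above
command = torch.tensor([[1.0, 0.0]])        # desired base velocity (x, y) [m/s]
base_lin_vel = torch.tensor([[0.5, 0.0]])   # measured base velocity [m/s]

# Same computation as _reward_tracking_lin_vel above.
lin_vel_error = torch.sum(torch.square(command - base_lin_vel), dim=1)  # 0.25
reward = torch.exp(-lin_vel_error / tracking_sigma)                     # exp(-1) ~ 0.368
```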
    def _reward_feet_air_time(self):
        # Reward long steps
        # Need to filter the contacts because the contact reporting of PhysX is unreliable on meshes
        contact = self.contact_forces[:, self.feet_indices, 2] > 1.
        contact_filt = torch.logical_or(contact, self.last_contacts)
        self.last_contacts = contact
        first_contact = (self.feet_air_time > 0.) * contact_filt
        self.feet_air_time += self.dt
        rew_airTime = torch.sum((self.feet_air_time - 0.5) * first_contact,
                                dim=1)  # reward only on first contact with the ground
        rew_airTime *= torch.norm(self.commands[:, :2], dim=1) > 0.1  # no reward for zero command
        self.feet_air_time *= ~contact_filt
        return rew_airTime

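The air-time bookkeeping above is easiest to see on toy tensors: a foot is rewarded once, at touchdown, for the time it spent airborne beyond 0.5 s, and its timer is then reset. A sketch with two feet and made-up timings:

```python
import torch

dt = 0.02
feet_air_time = torch.tensor([[0.6, 0.3]])     # seconds each foot has been airborne
last_contacts = torch.tensor([[False, False]])
contact = torch.tensor([[True, False]])        # foot 0 just touched down

# Same logic as _reward_feet_air_time above (per-env command filter omitted).
contact_filt = torch.logical_or(contact, last_contacts)
first_contact = (feet_air_time > 0.) * contact_filt
feet_air_time += dt
rew = torch.sum((feet_air_time - 0.5) * first_contact, dim=1)  # only foot 0 contributes
feet_air_time *= ~contact_filt                                 # reset timer for feet in contact
```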
    def _reward_stumble(self):
        # Penalize feet hitting vertical surfaces
        return torch.any(torch.norm(self.contact_forces[:, self.feet_indices, :2], dim=2) >
                         5 * torch.abs(self.contact_forces[:, self.feet_indices, 2]), dim=1)

    def _reward_stand_still(self):
        # Penalize motion at zero commands
        return torch.sum(torch.abs(self.dof_pos - self.default_dof_pos), dim=1) * (
                torch.norm(self.commands[:, :2], dim=1) < 0.1)

    def _reward_feet_contact_forces(self):
        # Penalize high contact forces
        return torch.sum((torch.norm(self.contact_forces[:, self.feet_indices, :],
                                     dim=-1) - self.cfg.rewards.max_contact_force).clip(min=0.), dim=1)
197
legged_gym/envs/fh/fh_config.py
Normal file
@@ -0,0 +1,197 @@
from legged_gym.envs.base.legged_robot_config_fh import LeggedRobotCfg_FH, LeggedRobotCfgPPO_FH


class FHRoughCfg(LeggedRobotCfg_FH):
    class env(LeggedRobotCfg_FH.env):
        num_envs = 4096
        num_actions = 14
        num_observations = 69  # + 187

    class commands(LeggedRobotCfg_FH.commands):
        curriculum = False
        max_curriculum = 1.5
        num_commands = 4  # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10.  # time before commands are changed [s]
        heading_command = False  # if true: compute ang vel command from heading error

        class ranges:
            # lin_vel_x = [-1.5, 1.5]  # min max [m/s]
            # lin_vel_y = [-1.5, 1.5]  # min max [m/s]
            # ang_vel_yaw = [-1.0, 1.0]  # min max [rad/s]
            # heading = [-3.14, 3.14]

            lin_vel_x = [-1.5, 1.5]  # min max [m/s]
            lin_vel_y = [-1.5, 1.5]  # min max [m/s]
            ang_vel_yaw = [-1.0, 1.0]  # min max [rad/s]
            heading = [-3.14, 3.14]

    class terrain(LeggedRobotCfg_FH.terrain):
        mesh_type = 'plane'  # none, plane, heightfield or trimesh
        horizontal_scale = 0.1  # [m]
        vertical_scale = 0.005  # [m]
        border_size = 25  # [m]
        curriculum = False
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # rough terrain only:
        measure_heights = False
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,
                             0.8]  # 1mx1.6m rectangle (without center line)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # Dict of arguments for selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)
        # terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
        terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
        # trimesh only:
        slope_treshold = 0.75  # slopes above this threshold will be corrected to vertical surfaces

    class init_state(LeggedRobotCfg_FH.init_state):
        pos = [0.0, 0.0, 0.65]  # x, y, z [m]
        default_joint_angles = {  # = target angles [rad] when action = 0.0
            'LF_Joint': 0.05,  # [rad]
            'RF_Joint': 0.05,  # [rad]
            'LR_Joint': 0.0,  # [rad]
            'RR_Joint': 0.0,  # [rad]

            'L_hip_pitch_Joint': 0.0,  # [rad]
            'R_hip_pitch_Joint': 0.0,  # [rad]
            'L_behind_wheel_Joint': 0.0,
            'R_behind_wheel_Joint': 0.0,

            'L_knee_pitch_Joint': 0.0,  # [rad]
            'R_knee_pitch_Joint': 0.0,  # [rad]
            'L_knee_behind_Joint': 0.0,
            'R_knee_behind_Joint': 0.0,

            'L_front_wheel_Joint': 0.0,
            'R_front_wheel_Joint': 0.0,
        }
        init_joint_angles = {  # = initial joint angles [rad] at reset
            'LF_Joint': 0.0,  # [rad]
            'RF_Joint': 0.0,  # [rad]
            'LR_Joint': 0.0,  # [rad]
            'RR_Joint': 0.0,  # [rad]

            'L_hip_pitch_Joint': 0.0,  # [rad]
            'R_hip_pitch_Joint': 0.0,  # [rad]
            'L_behind_wheel_Joint': 0.0,
            'R_behind_wheel_Joint': 0.0,

            'L_knee_pitch_Joint': 0.0,  # [rad]
            'R_knee_pitch_Joint': 0.0,  # [rad]
            'L_knee_behind_Joint': 0.0,
            'R_knee_behind_Joint': 0.0,

            'L_front_wheel_Joint': 0.0,
            'R_front_wheel_Joint': 0.0,
        }

    class rewards:
        class scales:
            termination = -0.8
            tracking_lin_vel = 3.0
            tracking_ang_vel = 1.5
            lin_vel_z = -0.1
            ang_vel_xy = -0.05
            orientation = 4
            torques = -0.0001
            dof_vel = -1e-7
            dof_acc = -1e-7
            base_height = -0.5
            feet_air_time = 0
            collision = -0.1
            feet_stumble = -0.1
            action_rate = -0.0002
            stand_still = -0.01
            dof_pos_limits = -0.9
            arm_pos = -0.
            hip_action_l2 = -0.1

        only_positive_rewards = True  # if true negative total rewards are clipped at zero (avoids early termination problems)
        tracking_sigma = 0.4  # tracking reward = exp(-error^2/sigma)
        soft_dof_pos_limit = 0.9  # percentage of urdf limits, values above this limit are penalized
        soft_dof_vel_limit = 0.9
        soft_torque_limit = 1.
        base_height_target = 0.60
        max_contact_force = 200.  # forces above this value are penalized

    class control(LeggedRobotCfg_FH.control):
        # PD Drive parameters:
        control_type = 'P'

        stiffness = {'LF_Joint': 350.,
                     'RF_Joint': 350.,
                     'LR_Joint': 350.,
                     'RR_Joint': 350.,

                     'L_hip_pitch_Joint': 350.,
                     'R_hip_pitch_Joint': 350.,
                     'L_behind_wheel_Joint': 80.,
                     'R_behind_wheel_Joint': 80.,

                     'L_knee_pitch_Joint': 350.,
                     'R_knee_pitch_Joint': 350.,
                     'L_knee_behind_Joint': 400.,
                     'R_knee_behind_Joint': 400.,

                     'L_front_wheel_Joint': 80.,
                     'R_front_wheel_Joint': 80.,
                     }  # [N*m/rad]

        damping = {'LF_Joint': 12.,
                   'RF_Joint': 12.,
                   'LR_Joint': 12.,
                   'RR_Joint': 12.,

                   'L_hip_pitch_Joint': 12.,
                   'R_hip_pitch_Joint': 12.,
                   'L_behind_wheel_Joint': 2.,
                   'R_behind_wheel_Joint': 2.,

                   'L_knee_pitch_Joint': 12.,
                   'R_knee_pitch_Joint': 12.,
                   'L_knee_behind_Joint': 12.,
                   'R_knee_behind_Joint': 12.,

                   'L_front_wheel_Joint': 2.,
                   'R_front_wheel_Joint': 2.,
                   }  # [N*m*s/rad]
        # action scale: target angle = actionScale * action + defaultAngle
        action_scale = 0.25
        # decimation: Number of control action updates @ sim DT per policy DT
        decimation = 4

    class asset(LeggedRobotCfg_FH.asset):
        file = '/home/hivecore/HiveCode_rl_gym/resources/robots/FHrobot/urdf/FHrobot.urdf'
        name = "FHrobot"
        arm_name = ""
        foot_name = "wheel"
        wheel_name = ["wheel"]
        penalize_contacts_on = ["body_Link"]
        terminate_after_contacts_on = []
        self_collisions = 0  # 1 to disable, 0 to enable... bitwise filter
        replace_cylinder_with_capsule = False
        flip_visual_attachments = False


class FHRoughCfgPPO(LeggedRobotCfgPPO_FH):
    class algorithm(LeggedRobotCfgPPO_FH.algorithm):
        entropy_coef = 0.003

    class runner(LeggedRobotCfgPPO_FH.runner):
        run_name = ''
        experiment_name = 'fh'
        resume = False
        num_steps_per_env = 48  # per iteration
        max_iterations = 10000
        load_run = ''
        checkpoint = 0

1035
legged_gym/envs/fh/fh_env.py
Normal file
File diff suppressed because it is too large
244
legged_gym/envs/proto/legged_robot_config_pro.py
Normal file
@@ -0,0 +1,244 @@
from legged_gym.envs.base.base_config import BaseConfig


class LeggedRobotCfg_PRO(BaseConfig):
    class env:
        num_envs = 4096
        num_observations = 235
        num_privileged_obs = None  # if not None a privileged_obs_buf will be returned by step() (critic obs for asymmetric training). None is returned otherwise
        num_actions = 12
        env_spacing = 3.  # not used with heightfields/trimeshes
        send_timeouts = True  # send time out information to the algorithm
        episode_length_s = 20  # episode length in seconds

    class terrain:
        mesh_type = 'trimesh'  # none, plane, heightfield or trimesh
        horizontal_scale = 0.1  # [m]
        vertical_scale = 0.005  # [m]
        border_size = 25  # [m]
        curriculum = True
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # rough terrain only:
        measure_heights = True
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,
                             0.8]  # 1mx1.6m rectangle (without center line)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # Dict of arguments for selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)
        # terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
        terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
        # trimesh only:
        slope_treshold = 0.75  # slopes above this threshold will be corrected to vertical surfaces

        '''mesh_type = 'trimesh'  # use a triangle mesh to represent complex terrain
        horizontal_scale = 0.1  # horizontal scale (m/unit)
        vertical_scale = 0.005  # vertical scale (m/unit), controls how uneven the terrain is
        border_size = 25.  # border size (m)
        curriculum = False  # disable curriculum learning, train on complex terrain from the start
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # terrain type proportions [smooth slope, rough slope, stairs up, stairs down, discrete obstacles]
        # here we mainly use rough slopes to create slightly uneven terrain
        terrain_proportions = [0.0, 1.0, 0.0, 0.0, 0.0]  # 100% rough slope
        # terrain measurement parameters
        measure_heights = True
        measured_points_x = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]  # 1 m line (11 points total)
        measured_points_y = [-0.25, -0.125, 0., 0.125, 0.25]  # 5 lines

        # terrain generation parameters - for rough terrain
        difficulty_scale = 1.0  # difficulty scaling factor
        roughness = 0.3  # roughness parameter (0-1), controls how uneven the terrain is
        # for slightly uneven terrain, use a medium roughness'''

    class commands:
        curriculum = False
        max_curriculum = 1.
        num_commands = 4  # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10.  # time before commands are changed [s]
        heading_command = True  # if true: compute ang vel command from heading error

        class ranges:
            lin_vel_x = [-1.0, 1.0]  # min max [m/s]
            lin_vel_y = [-1.0, 1.0]  # min max [m/s]
            ang_vel_yaw = [-1, 1]  # min max [rad/s]
            heading = [-3.14, 3.14]

    class init_state:
        pos = [0.0, 0.0, 1.]  # x, y, z [m]
        rot = [0.0, 0.0, 0.0, 1.0]  # x, y, z, w [quat]
        lin_vel = [0.0, 0.0, 0.0]  # x, y, z [m/s]
        ang_vel = [0.0, 0.0, 0.0]  # x, y, z [rad/s]
        default_joint_angles = {  # target angles when action = 0.0
            "joint_a": 0.,
            "joint_b": 0.}

    class control:
        control_type = 'P'  # P: position, V: velocity, T: torques
        # PD Drive parameters:
        stiffness = {'joint_a': 10.0, 'joint_b': 15.}  # [N*m/rad]
        damping = {'joint_a': 1.0, 'joint_b': 1.5}  # [N*m*s/rad]
        # action scale: target angle = actionScale * action + defaultAngle
        action_scale = 0.5
        # decimation: Number of control action updates @ sim DT per policy DT
        decimation = 4
        wheel_speed = 1
        arm_pos = 0

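The `action_scale` comment above documents `target angle = actionScale * action + defaultAngle`; with `control_type = 'P'` the joint torque is then a PD law on that target. A sketch with made-up numbers (the actual computation lives in `_compute_torques`, which is not shown in this diff, so treat this as an illustration of the documented convention rather than the exact code):

```python
import torch

# Assumed PD gains and scale, in the spirit of the config above.
stiffness, damping, action_scale = 10.0, 1.0, 0.5  # [N*m/rad], [N*m*s/rad], [-]
default_angle = 0.0                                # [rad]

action = torch.tensor(0.8)    # raw policy output
dof_pos = torch.tensor(0.1)   # current joint position [rad]
dof_vel = torch.tensor(0.2)   # current joint velocity [rad/s]

target = action_scale * action + default_angle      # 0.4 rad, per the comment above
torque = stiffness * (target - dof_pos) - damping * dof_vel  # PD torque [N*m]
```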
    class asset:
        file = ""
        name = "legged_robot"  # actor name
        arm_name = ""
        foot_name = "None"  # name of the feet bodies, used to index body state and contact force tensors
        penalize_contacts_on = []
        terminate_after_contacts_on = []
        disable_gravity = False
        collapse_fixed_joints = True  # merge bodies connected by fixed joints. Specific fixed joints can be kept by adding " <... dont_collapse="true">
        fix_base_link = False  # fix the base of the robot
        default_dof_drive_mode = 3  # see GymDofDriveModeFlags (0 is none, 1 is pos tgt, 2 is vel tgt, 3 effort)
        self_collisions = 0  # 1 to disable, 0 to enable... bitwise filter
        replace_cylinder_with_capsule = True  # replace collision cylinders with capsules, leads to faster/more stable simulation
        flip_visual_attachments = False  # Some .obj meshes must be flipped from y-up to z-up

        density = 0.001
        angular_damping = 0.
        linear_damping = 0.
        max_angular_velocity = 1000.
        max_linear_velocity = 1000.
        armature = 0.
        thickness = 0.01

    class domain_rand:
        randomize_friction = True
        friction_range = [0.5, 1.25]
        randomize_base_mass = False
        added_mass_range = [-1., 1.]
        push_robots = True
        push_interval_s = 15
        max_push_vel_xy = 1.

    class rewards:
        class scales:
            termination = -0.0
            tracking_lin_vel = 1.0
            tracking_ang_vel = 0.5
            lin_vel_z = -2.0
            ang_vel_xy = -0.05
            orientation = -0.
            torques = -0.00001
            dof_vel = -0.
            dof_acc = -2.5e-7
            base_height = -0.
            feet_air_time = 1.0
            collision = -1.
            feet_stumble = -0.0
            action_rate = -0.01
            stand_still = -0.

        only_positive_rewards = True  # if true negative total rewards are clipped at zero (avoids early termination problems)
        tracking_sigma = 0.25  # tracking reward = exp(-error^2/sigma)
        soft_dof_pos_limit = 1.  # percentage of urdf limits, values above this limit are penalized
        soft_dof_vel_limit = 1.
        soft_torque_limit = 1.
        base_height_target = 1.
        max_contact_force = 100.  # forces above this value are penalized

    class normalization:
        class obs_scales:
            lin_vel = 2.0
            ang_vel = 0.25
            dof_pos = 1.0
            dof_vel = 0.05
            height_measurements = 5.0

        clip_observations = 100.
        clip_actions = 100.

    class noise:
        add_noise = True
        noise_level = 1.0  # scales other values

        class noise_scales:
            dof_pos = 0.01
            dof_vel = 1.5
            lin_vel = 0.1
            ang_vel = 0.2
            gravity = 0.05
            height_measurements = 0.1

    # viewer camera:
    class viewer:
        ref_env = 0
        pos = [10, 0, 6]  # [m]
        lookat = [11., 5, 3.]  # [m]

    class sim:
        dt = 0.005
        substeps = 1
        gravity = [0., 0., -9.81]  # [m/s^2]
        up_axis = 1  # 0 is y, 1 is z

        class physx:
            num_threads = 10
            solver_type = 1  # 0: pgs, 1: tgs
            num_position_iterations = 4
            num_velocity_iterations = 0
            contact_offset = 0.01  # [m]
            rest_offset = 0.0  # [m]
            bounce_threshold_velocity = 0.5  # [m/s]
            max_depenetration_velocity = 1.0
            max_gpu_contact_pairs = 2 ** 23  # 2**24 -> needed for 8000 envs and more
            default_buffer_size_multiplier = 5
            contact_collection = 2  # 0: never, 1: last sub-step, 2: all sub-steps (default=2)


class LeggedRobotCfgPPO_PRO(BaseConfig):
    seed = 1
    runner_class_name = 'OnPolicyRunner'

    class policy:
        init_noise_std = 1.0
        actor_hidden_dims = [512, 256, 128]
        critic_hidden_dims = [512, 256, 128]
        activation = 'elu'  # can be elu, relu, selu, crelu, lrelu, tanh, sigmoid
        # only for 'ActorCriticRecurrent':
        # rnn_type = 'lstm'
        # rnn_hidden_size = 512
        # rnn_num_layers = 1

    class algorithm:
        # training params
        value_loss_coef = 1.0
        use_clipped_value_loss = True
        clip_param = 0.2
        entropy_coef = 0.01
        num_learning_epochs = 5
        num_mini_batches = 4  # mini batch size = num_envs*nsteps / nminibatches
        learning_rate = 1.e-3  # 5.e-4
        schedule = 'adaptive'  # could be adaptive, fixed
        gamma = 0.99
        lam = 0.95
        desired_kl = 0.01
        max_grad_norm = 1.

    class runner:
        policy_class_name = 'ActorCritic'
        algorithm_class_name = 'PPO'
        num_steps_per_env = 24  # per iteration
        max_iterations = 1500  # number of policy updates

        # logging
        save_interval = 50  # check for potential saves every this many iterations
        experiment_name = 'test'
        run_name = ''
        # load and resume
        resume = False
        load_run = -1  # -1 = last run
        checkpoint = -1  # -1 = last saved model
        resume_path = None  # updated from load_run and chkpt
938
legged_gym/envs/proto/legged_robot_pro.py
Normal file
@@ -0,0 +1,938 @@
from legged_gym import LEGGED_GYM_ROOT_DIR, envs
import time
from warnings import WarningMessage
import numpy as np
import os

from isaacgym.torch_utils import *
from isaacgym import gymtorch, gymapi, gymutil

import torch
from torch import Tensor
from typing import Tuple, Dict

from legged_gym.envs.base.base_task import BaseTask
from legged_gym.utils.terrain import Terrain
from legged_gym.utils.math import quat_apply_yaw, wrap_to_pi, torch_rand_sqrt_float
from legged_gym.utils.isaacgym_utils import get_euler_xyz as get_euler_xyz_in_tensor
from legged_gym.utils.helpers import class_to_dict
from legged_gym.envs.proto.legged_robot_config_pro import LeggedRobotCfg_PRO


class LeggedRobot_PRO(BaseTask):
    def __init__(self, cfg: LeggedRobotCfg_PRO, sim_params, physics_engine, sim_device, headless):
        """ Parses the provided config file,
            calls create_sim() (which creates the simulation, terrain and environments),
            initializes pytorch buffers used during training

        Args:
            cfg (Dict): Environment config file
            sim_params (gymapi.SimParams): simulation parameters
            physics_engine (gymapi.SimType): gymapi.SIM_PHYSX (must be PhysX)
            sim_device (string): 'cpu' or a cuda device string, e.g. 'cuda:0'
            headless (bool): Run without rendering if True
        """
        self.cfg = cfg
        self.sim_params = sim_params
        self.height_samples = None
        self.debug_viz = False
        self.init_done = False
        self._parse_cfg(self.cfg)
        super().__init__(self.cfg, sim_params, physics_engine, sim_device, headless)

        if not self.headless:
            self.set_camera(self.cfg.viewer.pos, self.cfg.viewer.lookat)
        self._init_buffers()
        self._prepare_reward_function()
        self.init_done = True

    def step(self, actions):
        """ Apply actions, simulate, call self.post_physics_step()

        Args:
            actions (torch.Tensor): Tensor of shape (num_envs, num_actions_per_env)
        """
        clip_actions = self.cfg.normalization.clip_actions
        self.actions = torch.clip(actions, -clip_actions, clip_actions).to(self.device)
        # step physics and render each frame
        self.render()
        for _ in range(self.cfg.control.decimation):
            self.torques = self._compute_torques(self.actions).view(self.torques.shape)
            self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(self.torques))
            self.gym.simulate(self.sim)
            if self.device == 'cpu':
                self.gym.fetch_results(self.sim, True)
            self.gym.refresh_dof_state_tensor(self.sim)
        self.post_physics_step()

        # return clipped obs, clipped states (None), rewards, dones and infos
        clip_obs = self.cfg.normalization.clip_observations
        self.obs_buf = torch.clip(self.obs_buf, -clip_obs, clip_obs)
        if self.privileged_obs_buf is not None:
            self.privileged_obs_buf = torch.clip(self.privileged_obs_buf, -clip_obs, clip_obs)
        return self.obs_buf, self.privileged_obs_buf, self.rew_buf, self.reset_buf, self.extras

def post_physics_step(self):
|
||||
""" check terminations, compute observations and rewards
|
||||
calls self._post_physics_step_callback() for common computations
|
||||
calls self._draw_debug_vis() if needed
|
||||
"""
|
||||
self.gym.refresh_actor_root_state_tensor(self.sim)
|
||||
self.gym.refresh_net_contact_force_tensor(self.sim)
|
||||
|
||||
self.episode_length_buf += 1
|
||||
self.common_step_counter += 1
|
||||
|
||||
# prepare quantities
|
||||
self.base_quat[:] = self.root_states[:, 3:7]
|
||||
self.base_lin_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
|
||||
self.base_ang_vel[:] = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
|
||||
self.projected_gravity[:] = quat_rotate_inverse(self.base_quat, self.gravity_vec)
|
||||
|
||||
self._post_physics_step_callback()
|
||||
|
||||
# compute observations, rewards, resets, ...
|
||||
self.check_termination()
|
||||
self.compute_reward()
|
||||
env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
|
||||
self.reset_idx(env_ids)
|
||||
self.compute_observations() # in some cases a simulation step might be required to refresh some obs (for example body positions)
|
||||
|
||||
self.last_actions[:] = self.actions[:]
|
||||
self.last_dof_vel[:] = self.dof_vel[:]
|
||||
self.last_root_vel[:] = self.root_states[:, 7:13]
|
||||
|
||||
if self.viewer and self.enable_viewer_sync and self.debug_viz:
|
||||
self._draw_debug_vis()
|
||||
|

    def check_termination(self):
        """ Check if environments need to be reset
        """
        self.reset_buf = torch.any(torch.norm(self.contact_forces[:, self.termination_contact_indices, :], dim=-1) > 1., dim=1)
        self.time_out_buf = self.episode_length_buf > self.max_episode_length  # no terminal reward for time-outs
        self.reset_buf |= self.time_out_buf

    def reset_idx(self, env_ids):
        """ Reset some environments.
            Calls self._reset_dofs(env_ids), self._reset_root_states(env_ids), and self._resample_commands(env_ids)
            [Optional] calls self._update_terrain_curriculum(env_ids) and self.update_command_curriculum(env_ids)
            Logs episode info
            Resets some buffers

        Args:
            env_ids (list[int]): List of environment ids which must be reset
        """
        if len(env_ids) == 0:
            return
        # update curriculum
        if self.cfg.terrain.curriculum:
            self._update_terrain_curriculum(env_ids)
        # avoid updating command curriculum at each step since the maximum command is common to all envs
        if self.cfg.commands.curriculum and (self.common_step_counter % self.max_episode_length == 0):
            self.update_command_curriculum(env_ids)

        # reset robot states
        self._reset_dofs(env_ids)
        self._reset_root_states(env_ids)

        self._resample_commands(env_ids)

        # reset buffers
        self.last_actions[env_ids] = 0.
        self.last_dof_vel[env_ids] = 0.
        self.feet_air_time[env_ids] = 0.
        self.episode_length_buf[env_ids] = 0
        self.reset_buf[env_ids] = 1
        # fill extras
        self.extras["episode"] = {}
        for key in self.episode_sums.keys():
            self.extras["episode"]['rew_' + key] = torch.mean(self.episode_sums[key][env_ids]) / self.max_episode_length_s
            self.episode_sums[key][env_ids] = 0.
        # log additional curriculum info
        if self.cfg.terrain.curriculum:
            self.extras["episode"]["terrain_level"] = torch.mean(self.terrain_levels.float())
        if self.cfg.commands.curriculum:
            self.extras["episode"]["max_command_x"] = self.command_ranges["lin_vel_x"][1]
        # send timeout info to the algorithm
        if self.cfg.env.send_timeouts:
            self.extras["time_outs"] = self.time_out_buf

    def compute_reward(self):
        """ Compute rewards
            Calls each reward function which had a non-zero scale (processed in self._prepare_reward_function())
            adds each term to the episode sums and to the total reward
        """
        self.rew_buf[:] = 0.
        for i in range(len(self.reward_functions)):
            name = self.reward_names[i]
            rew = self.reward_functions[i]() * self.reward_scales[name]
            self.rew_buf += rew
            self.episode_sums[name] += rew
        if self.cfg.rewards.only_positive_rewards:
            self.rew_buf[:] = torch.clip(self.rew_buf[:], min=0.)
        # add termination reward after clipping
        if "termination" in self.reward_scales:
            rew = self._reward_termination() * self.reward_scales["termination"]
            self.rew_buf += rew
            self.episode_sums["termination"] += rew

    def compute_observations(self):
        """ Computes observations
        """
        self.obs_buf = torch.cat((self.base_lin_vel * self.obs_scales.lin_vel,
                                  self.base_ang_vel * self.obs_scales.ang_vel,
                                  self.projected_gravity,
                                  self.commands[:, :3] * self.commands_scale,
                                  (self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos,
                                  self.dof_vel * self.obs_scales.dof_vel,
                                  self.actions
                                  ), dim=-1)
        # add perceptive inputs if not blind
        if self.cfg.terrain.measure_heights:
            heights = torch.clip(self.root_states[:, 2].unsqueeze(1) - 0.5 - self.measured_heights, -1, 1.) * self.obs_scales.height_measurements
            self.obs_buf = torch.cat((self.obs_buf, heights), dim=-1)
        # add noise if needed
        if self.add_noise:
            self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec

    def create_sim(self):
        """ Creates simulation, terrain and environments
        """
        self.up_axis_idx = 2  # 2 for z, 1 for y -> adapt gravity accordingly
        self.sim = self.gym.create_sim(self.sim_device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
        mesh_type = self.cfg.terrain.mesh_type
        if mesh_type in ['heightfield', 'trimesh']:
            self.terrain = Terrain(self.cfg.terrain, self.num_envs)
        if mesh_type == 'plane':
            self._create_ground_plane()
        elif mesh_type == 'heightfield':
            self._create_heightfield()
        elif mesh_type == 'trimesh':
            self._create_trimesh()
        elif mesh_type is not None:
            raise ValueError("Terrain mesh type not recognised. Allowed types are [None, plane, heightfield, trimesh]")
        self._create_envs()

    def set_camera(self, position, lookat):
        """ Set camera position and direction
        """
        cam_pos = gymapi.Vec3(position[0], position[1], position[2])
        cam_target = gymapi.Vec3(lookat[0], lookat[1], lookat[2])
        self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)

    # ------------- Callbacks --------------
    def _process_rigid_shape_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the rigid shape properties of each environment.
            Called during environment creation.
            Base behavior: randomizes the friction of each environment

        Args:
            props (List[gymapi.RigidShapeProperties]): Properties of each shape of the asset
            env_id (int): Environment id

        Returns:
            [List[gymapi.RigidShapeProperties]]: Modified rigid shape properties
        """
        if self.cfg.domain_rand.randomize_friction:
            if env_id == 0:
                # prepare friction randomization
                friction_range = self.cfg.domain_rand.friction_range
                num_buckets = 64
                bucket_ids = torch.randint(0, num_buckets, (self.num_envs, 1))
                friction_buckets = torch_rand_float(friction_range[0], friction_range[1], (num_buckets, 1), device='cpu')
                self.friction_coeffs = friction_buckets[bucket_ids]

            for s in range(len(props)):
                props[s].friction = self.friction_coeffs[env_id]
        return props

    def _process_dof_props(self, props, env_id):
        """ Callback allowing to store/change/randomize the DOF properties of each environment.
            Called during environment creation.
            Base behavior: stores position, velocity and torque limits defined in the URDF

        Args:
            props (numpy.array): Properties of each DOF of the asset
            env_id (int): Environment id

        Returns:
            [numpy.array]: Modified DOF properties
        """
        if env_id == 0:
            self.dof_pos_limits = torch.zeros(self.num_dof, 2, dtype=torch.float, device=self.device, requires_grad=False)
            self.dof_vel_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            self.torque_limits = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
            for i in range(len(props)):
                self.dof_pos_limits[i, 0] = props["lower"][i].item()
                self.dof_pos_limits[i, 1] = props["upper"][i].item()
                self.dof_vel_limits[i] = props["velocity"][i].item()
                self.torque_limits[i] = props["effort"][i].item()
                # soft limits
                m = (self.dof_pos_limits[i, 0] + self.dof_pos_limits[i, 1]) / 2
                r = self.dof_pos_limits[i, 1] - self.dof_pos_limits[i, 0]
                self.dof_pos_limits[i, 0] = m - 0.5 * r * self.cfg.rewards.soft_dof_pos_limit
                self.dof_pos_limits[i, 1] = m + 0.5 * r * self.cfg.rewards.soft_dof_pos_limit
        return props

    def _process_rigid_body_props(self, props, env_id):
        # if env_id == 0:
        #     sum = 0
        #     for i, p in enumerate(props):
        #         sum += p.mass
        #         print(f"Mass of body {i}: {p.mass} (before randomization)")
        #     print(f"Total mass {sum} (before randomization)")
        # randomize base mass
        if self.cfg.domain_rand.randomize_base_mass:
            rng = self.cfg.domain_rand.added_mass_range
            props[0].mass += np.random.uniform(rng[0], rng[1])
        return props

    def _post_physics_step_callback(self):
        """ Callback called before computing terminations, rewards, and observations
            Default behaviour: Compute ang vel command based on target and heading, compute measured terrain heights and randomly push robots
        """
        env_ids = (self.episode_length_buf % int(self.cfg.commands.resampling_time / self.dt) == 0).nonzero(as_tuple=False).flatten()
        self._resample_commands(env_ids)
        if self.cfg.commands.heading_command:
            forward = quat_apply(self.base_quat, self.forward_vec)
            heading = torch.atan2(forward[:, 1], forward[:, 0])
            self.commands[:, 2] = torch.clip(0.5 * wrap_to_pi(self.commands[:, 3] - heading), -1., 1.)

        if self.cfg.terrain.measure_heights:
            self.measured_heights = self._get_heights()
        if self.cfg.domain_rand.push_robots and (self.common_step_counter % self.cfg.domain_rand.push_interval == 0):
            self._push_robots()

    def _resample_commands(self, env_ids):
        """ Randomly select commands of some environments

        Args:
            env_ids (List[int]): Environment ids for which new commands are needed
        """
        self.commands[env_ids, 0] = torch_rand_float(self.command_ranges["lin_vel_x"][0], self.command_ranges["lin_vel_x"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        self.commands[env_ids, 1] = torch_rand_float(self.command_ranges["lin_vel_y"][0], self.command_ranges["lin_vel_y"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        if self.cfg.commands.heading_command:
            self.commands[env_ids, 3] = torch_rand_float(self.command_ranges["heading"][0], self.command_ranges["heading"][1], (len(env_ids), 1), device=self.device).squeeze(1)
        else:
            self.commands[env_ids, 2] = torch_rand_float(self.command_ranges["ang_vel_yaw"][0], self.command_ranges["ang_vel_yaw"][1], (len(env_ids), 1), device=self.device).squeeze(1)

        # set small commands to zero
        self.commands[env_ids, :2] *= (torch.norm(self.commands[env_ids, :2], dim=1) > 0.2).unsqueeze(1)

    def _compute_torques(self, actions):
        """ Compute torques from actions.
            Actions can be interpreted as position or velocity targets given to a PD controller, or directly as scaled torques.
            [NOTE]: torques must have the same dimension as the number of DOFs, even if some DOFs are not actuated.

        Args:
            actions (torch.Tensor): Actions

        Returns:
            [torch.Tensor]: Torques sent to the simulation
        """
        # pd controller
        actions_scaled = actions * self.cfg.control.action_scale
        control_type = self.cfg.control.control_type
        if control_type == "P":
            torques = self.p_gains * (actions_scaled + self.default_dof_pos - self.dof_pos) - self.d_gains * self.dof_vel
        elif control_type == "V":
            torques = self.p_gains * (actions_scaled - self.dof_vel) - self.d_gains * (self.dof_vel - self.last_dof_vel) / self.sim_params.dt
        elif control_type == "T":
            torques = actions_scaled
        else:
            raise NameError(f"Unknown controller type: {control_type}")
        return torch.clip(torques, -self.torque_limits, self.torque_limits)

    def _reset_dofs(self, env_ids):
        """ Resets DOF positions and velocities of selected environments
            Positions are randomly selected within 0.5:1.5 x default positions.
            Velocities are set to zero.

        Args:
            env_ids (List[int]): Environment ids
        """
        self.dof_pos[env_ids] = self.default_dof_pos * torch_rand_float(0.5, 1.5, (len(env_ids), self.num_dof), device=self.device)
        self.dof_vel[env_ids] = 0.

        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_dof_state_tensor_indexed(self.sim,
                                              gymtorch.unwrap_tensor(self.dof_state),
                                              gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))

    def _reset_root_states(self, env_ids):
        """ Resets root states (position and velocities) of selected environments
            Sets base position based on the curriculum
            Selects randomized base velocities within -0.5:0.5 [m/s, rad/s]

        Args:
            env_ids (List[int]): Environment ids
        """
        # base position
        if self.custom_origins:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
            self.root_states[env_ids, :2] += torch_rand_float(-1., 1., (len(env_ids), 2), device=self.device)  # xy position within 1m of the center
        else:
            self.root_states[env_ids] = self.base_init_state
            self.root_states[env_ids, :3] += self.env_origins[env_ids]
        # base velocities
        self.root_states[env_ids, 7:13] = torch_rand_float(-0.5, 0.5, (len(env_ids), 6), device=self.device)  # [7:10]: lin vel, [10:13]: ang vel
        env_ids_int32 = env_ids.to(dtype=torch.int32)
        self.gym.set_actor_root_state_tensor_indexed(self.sim,
                                                     gymtorch.unwrap_tensor(self.root_states),
                                                     gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))

    def _push_robots(self):
        """ Randomly pushes the robots. Emulates an impulse by setting a randomized base velocity.
        """
        max_vel = self.cfg.domain_rand.max_push_vel_xy
        self.root_states[:, 7:9] = torch_rand_float(-max_vel, max_vel, (self.num_envs, 2), device=self.device)  # lin vel x/y
        self.gym.set_actor_root_state_tensor(self.sim, gymtorch.unwrap_tensor(self.root_states))

    def _update_terrain_curriculum(self, env_ids):
        """ Implements the game-inspired curriculum.

        Args:
            env_ids (List[int]): ids of environments being reset
        """
        # Implement Terrain curriculum
        if not self.init_done:
            # don't change on initial reset
            return
        distance = torch.norm(self.root_states[env_ids, :2] - self.env_origins[env_ids, :2], dim=1)
        # robots that walked far enough progress to harder terrains
        move_up = distance > self.terrain.env_length / 2
        # robots that walked less than half of their required distance go to simpler terrains
        move_down = (distance < torch.norm(self.commands[env_ids, :2], dim=1) * self.max_episode_length_s * 0.5) * ~move_up
        self.terrain_levels[env_ids] += 1 * move_up - 1 * move_down
        # Robots that solve the last level are sent to a random one
        self.terrain_levels[env_ids] = torch.where(self.terrain_levels[env_ids] >= self.max_terrain_level,
                                                   torch.randint_like(self.terrain_levels[env_ids], self.max_terrain_level),
                                                   torch.clip(self.terrain_levels[env_ids], 0))  # (the minimum level is zero)
        self.env_origins[env_ids] = self.terrain_origins[self.terrain_levels[env_ids], self.terrain_types[env_ids]]

    def update_command_curriculum(self, env_ids):
        """ Implements a curriculum of increasing commands

        Args:
            env_ids (List[int]): ids of environments being reset
        """
        # If the tracking reward is above 80% of the maximum, increase the range of commands
        if torch.mean(self.episode_sums["tracking_lin_vel"][env_ids]) / self.max_episode_length > 0.8 * self.reward_scales["tracking_lin_vel"]:
            self.command_ranges["lin_vel_x"][0] = np.clip(self.command_ranges["lin_vel_x"][0] - 0.5, -self.cfg.commands.max_curriculum, 0.)
            self.command_ranges["lin_vel_x"][1] = np.clip(self.command_ranges["lin_vel_x"][1] + 0.5, 0., self.cfg.commands.max_curriculum)

    def _get_noise_scale_vec(self, cfg):
        """ Sets a vector used to scale the noise added to the observations.
            [NOTE]: Must be adapted when changing the observation structure

        Args:
            cfg (Dict): Environment config file

        Returns:
            [torch.Tensor]: Vector of scales used to multiply a uniform distribution in [-1, 1]
        """
        noise_vec = torch.zeros_like(self.obs_buf[0])
        self.add_noise = self.cfg.noise.add_noise
        noise_scales = self.cfg.noise.noise_scales
        noise_level = self.cfg.noise.noise_level
        noise_vec[:3] = noise_scales.lin_vel * noise_level * self.obs_scales.lin_vel
        noise_vec[3:6] = noise_scales.ang_vel * noise_level * self.obs_scales.ang_vel
        noise_vec[6:9] = noise_scales.gravity * noise_level
        noise_vec[9:12] = 0.  # commands
        noise_vec[12:24] = noise_scales.dof_pos * noise_level * self.obs_scales.dof_pos
        noise_vec[24:36] = noise_scales.dof_vel * noise_level * self.obs_scales.dof_vel
        noise_vec[36:48] = 0.  # previous actions
        if self.cfg.terrain.measure_heights:
            noise_vec[48:235] = noise_scales.height_measurements * noise_level * self.obs_scales.height_measurements
        return noise_vec

    # ----------------------------------------
    def _init_buffers(self):
        """ Initialize torch tensors which will contain simulation states and processed quantities
        """
        # get gym GPU state tensors
        actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
        dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
        net_contact_forces = self.gym.acquire_net_contact_force_tensor(self.sim)
        self.gym.refresh_dof_state_tensor(self.sim)
        self.gym.refresh_actor_root_state_tensor(self.sim)
        self.gym.refresh_net_contact_force_tensor(self.sim)

        # create some wrapper tensors for different slices
        self.root_states = gymtorch.wrap_tensor(actor_root_state)
        self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
        self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
        self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
        self.base_quat = self.root_states[:, 3:7]

        self.contact_forces = gymtorch.wrap_tensor(net_contact_forces).view(self.num_envs, -1, 3)  # shape: num_envs, num_bodies, xyz axis

        # initialize some data used later on
        self.common_step_counter = 0
        self.extras = {}
        self.noise_scale_vec = self._get_noise_scale_vec(self.cfg)
        self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
        self.forward_vec = to_torch([1., 0., 0.], device=self.device).repeat((self.num_envs, 1))
        self.torques = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.p_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.d_gains = torch.zeros(self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.last_actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
        self.last_dof_vel = torch.zeros_like(self.dof_vel)
        self.last_root_vel = torch.zeros_like(self.root_states[:, 7:13])
        self.commands = torch.zeros(self.num_envs, self.cfg.commands.num_commands, dtype=torch.float, device=self.device, requires_grad=False)  # x vel, y vel, yaw vel, heading
        self.commands_scale = torch.tensor([self.obs_scales.lin_vel, self.obs_scales.lin_vel, self.obs_scales.ang_vel], device=self.device, requires_grad=False)  # TODO change this
        self.feet_air_time = torch.zeros(self.num_envs, self.feet_indices.shape[0], dtype=torch.float, device=self.device, requires_grad=False)
        self.last_contacts = torch.zeros(self.num_envs, len(self.feet_indices), dtype=torch.bool, device=self.device, requires_grad=False)
        self.base_lin_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
        self.base_ang_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
        self.projected_gravity = quat_rotate_inverse(self.base_quat, self.gravity_vec)
        if self.cfg.terrain.measure_heights:
            self.height_points = self._init_height_points()
        self.measured_heights = 0

        # joint position offsets and PD gains
        self.default_dof_pos = torch.zeros(self.num_dof, dtype=torch.float, device=self.device, requires_grad=False)
        for i in range(self.num_dofs):
            name = self.dof_names[i]
            angle = self.cfg.init_state.default_joint_angles[name]
            self.default_dof_pos[i] = angle
            found = False
            for dof_name in self.cfg.control.stiffness.keys():
                if dof_name in name:
                    self.p_gains[i] = self.cfg.control.stiffness[dof_name]
                    self.d_gains[i] = self.cfg.control.damping[dof_name]
                    found = True
            if not found:
                self.p_gains[i] = 0.
                self.d_gains[i] = 0.
                if self.cfg.control.control_type in ["P", "V"]:
                    print(f"PD gains of joint {name} were not defined, setting them to zero")
        self.default_dof_pos = self.default_dof_pos.unsqueeze(0)

    def _prepare_reward_function(self):
        """ Prepares a list of reward functions, which will be called to compute the total reward.
            Looks for self._reward_<REWARD_NAME>, where <REWARD_NAME> are names of all non-zero reward scales in the cfg.
        """
        # remove zero scales + multiply non-zero ones by dt
        for key in list(self.reward_scales.keys()):
            scale = self.reward_scales[key]
            if scale == 0:
                self.reward_scales.pop(key)
            else:
                self.reward_scales[key] *= self.dt
        # prepare list of functions
        self.reward_functions = []
        self.reward_names = []
        for name, scale in self.reward_scales.items():
            if name == "termination":
                continue
            self.reward_names.append(name)
            name = '_reward_' + name
            self.reward_functions.append(getattr(self, name))

        # reward episode sums
        self.episode_sums = {
            name: torch.zeros(self.num_envs, dtype=torch.float, device=self.device, requires_grad=False)
            for name in self.reward_scales.keys()}

    def _create_ground_plane(self):
        """ Adds a ground plane to the simulation, sets friction and restitution based on the cfg.
        """
        plane_params = gymapi.PlaneParams()
        plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
        plane_params.static_friction = self.cfg.terrain.static_friction
        plane_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        plane_params.restitution = self.cfg.terrain.restitution
        self.gym.add_ground(self.sim, plane_params)

    def _create_heightfield(self):
        """ Adds a heightfield terrain to the simulation, sets parameters based on the cfg.
        """
        hf_params = gymapi.HeightFieldParams()
        hf_params.column_scale = self.terrain.cfg.horizontal_scale
        hf_params.row_scale = self.terrain.cfg.horizontal_scale
        hf_params.vertical_scale = self.terrain.cfg.vertical_scale
        hf_params.nbRows = self.terrain.tot_cols
        hf_params.nbColumns = self.terrain.tot_rows
        hf_params.transform.p.x = -self.terrain.cfg.border_size
        hf_params.transform.p.y = -self.terrain.cfg.border_size
        hf_params.transform.p.z = 0.0
        hf_params.static_friction = self.cfg.terrain.static_friction
        hf_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        hf_params.restitution = self.cfg.terrain.restitution

        self.gym.add_heightfield(self.sim, self.terrain.heightsamples, hf_params)
        self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows, self.terrain.tot_cols).to(self.device)

    def _create_trimesh(self):
        """ Adds a triangle mesh terrain to the simulation, sets parameters based on the cfg.
        """
        tm_params = gymapi.TriangleMeshParams()
        tm_params.nb_vertices = self.terrain.vertices.shape[0]
        tm_params.nb_triangles = self.terrain.triangles.shape[0]

        tm_params.transform.p.x = -self.terrain.cfg.border_size
        tm_params.transform.p.y = -self.terrain.cfg.border_size
        tm_params.transform.p.z = 0.0
        tm_params.static_friction = self.cfg.terrain.static_friction
        tm_params.dynamic_friction = self.cfg.terrain.dynamic_friction
        tm_params.restitution = self.cfg.terrain.restitution
        self.gym.add_triangle_mesh(self.sim, self.terrain.vertices.flatten(order='C'), self.terrain.triangles.flatten(order='C'), tm_params)
        self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows, self.terrain.tot_cols).to(self.device)

    def _create_envs(self):
        """ Creates environments:
            1. loads the robot URDF/MJCF asset,
            2. for each environment
               2.1 creates the environment,
               2.2 calls DOF and rigid shape property callbacks,
               2.3 creates the actor with these properties and adds it to the env,
            3. stores indices of different bodies of the robot
        """
        asset_path = self.cfg.asset.file.format(LOCO_MANI_GYM_ROOT_DIR=LOCO_MANI_GYM_ROOT_DIR)
        asset_root = os.path.dirname(asset_path)
        asset_file = os.path.basename(asset_path)

        asset_options = gymapi.AssetOptions()
        asset_options.default_dof_drive_mode = self.cfg.asset.default_dof_drive_mode
        asset_options.collapse_fixed_joints = self.cfg.asset.collapse_fixed_joints
        asset_options.replace_cylinder_with_capsule = self.cfg.asset.replace_cylinder_with_capsule
        asset_options.flip_visual_attachments = self.cfg.asset.flip_visual_attachments
        asset_options.fix_base_link = self.cfg.asset.fix_base_link
        asset_options.density = self.cfg.asset.density
        asset_options.angular_damping = self.cfg.asset.angular_damping
        asset_options.linear_damping = self.cfg.asset.linear_damping
        asset_options.max_angular_velocity = self.cfg.asset.max_angular_velocity
        asset_options.max_linear_velocity = self.cfg.asset.max_linear_velocity
        asset_options.armature = self.cfg.asset.armature
        asset_options.thickness = self.cfg.asset.thickness
        asset_options.disable_gravity = self.cfg.asset.disable_gravity

        robot_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
        self.num_dof = self.gym.get_asset_dof_count(robot_asset)
        self.num_bodies = self.gym.get_asset_rigid_body_count(robot_asset)
        dof_props_asset = self.gym.get_asset_dof_properties(robot_asset)
        rigid_shape_props_asset = self.gym.get_asset_rigid_shape_properties(robot_asset)

        # save body names from the asset
        body_names = self.gym.get_asset_rigid_body_names(robot_asset)
        self.dof_names = self.gym.get_asset_dof_names(robot_asset)
        self.num_bodies = len(body_names)
        self.num_dofs = len(self.dof_names)
        feet_names = [s for s in body_names if self.cfg.asset.foot_name in s]
        penalized_contact_names = []
        for name in self.cfg.asset.penalize_contacts_on:
            penalized_contact_names.extend([s for s in body_names if name in s])
        termination_contact_names = []
        for name in self.cfg.asset.terminate_after_contacts_on:
            termination_contact_names.extend([s for s in body_names if name in s])

        base_init_state_list = self.cfg.init_state.pos + self.cfg.init_state.rot + self.cfg.init_state.lin_vel + self.cfg.init_state.ang_vel
        self.base_init_state = to_torch(base_init_state_list, device=self.device, requires_grad=False)
        start_pose = gymapi.Transform()
        start_pose.p = gymapi.Vec3(*self.base_init_state[:3])

        self._get_env_origins()
        env_lower = gymapi.Vec3(0., 0., 0.)
        env_upper = gymapi.Vec3(0., 0., 0.)
        self.actor_handles = []
        self.envs = []
        for i in range(self.num_envs):
            # create env instance
            env_handle = self.gym.create_env(self.sim, env_lower, env_upper, int(np.sqrt(self.num_envs)))
            pos = self.env_origins[i].clone()
            pos[:2] += torch_rand_float(-1., 1., (2, 1), device=self.device).squeeze(1)
            start_pose.p = gymapi.Vec3(*pos)

            rigid_shape_props = self._process_rigid_shape_props(rigid_shape_props_asset, i)
            self.gym.set_asset_rigid_shape_properties(robot_asset, rigid_shape_props)
            actor_handle = self.gym.create_actor(env_handle, robot_asset, start_pose, self.cfg.asset.name, i, self.cfg.asset.self_collisions, 0)
            dof_props = self._process_dof_props(dof_props_asset, i)
            self.gym.set_actor_dof_properties(env_handle, actor_handle, dof_props)
            body_props = self.gym.get_actor_rigid_body_properties(env_handle, actor_handle)
            body_props = self._process_rigid_body_props(body_props, i)
            self.gym.set_actor_rigid_body_properties(env_handle, actor_handle, body_props, recomputeInertia=True)
            self.envs.append(env_handle)
            self.actor_handles.append(actor_handle)

        self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
        for i in range(len(feet_names)):
            self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], feet_names[i])

        self.penalised_contact_indices = torch.zeros(len(penalized_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
        for i in range(len(penalized_contact_names)):
            self.penalised_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], penalized_contact_names[i])

        self.termination_contact_indices = torch.zeros(len(termination_contact_names), dtype=torch.long, device=self.device, requires_grad=False)
        for i in range(len(termination_contact_names)):
            self.termination_contact_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.actor_handles[0], termination_contact_names[i])

    def _get_env_origins(self):
        """ Sets environment origins. On rough terrain the origins are defined by the terrain platforms.
            Otherwise create a grid.
        """
        if self.cfg.terrain.mesh_type in ["heightfield", "trimesh"]:
            self.custom_origins = True
            self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
            # put robots at the origins defined by the terrain
            max_init_level = self.cfg.terrain.max_init_terrain_level
            if not self.cfg.terrain.curriculum:
                max_init_level = self.cfg.terrain.num_rows - 1
            self.terrain_levels = torch.randint(0, max_init_level + 1, (self.num_envs,), device=self.device)
            self.terrain_types = torch.div(torch.arange(self.num_envs, device=self.device),
                                           (self.num_envs / self.cfg.terrain.num_cols), rounding_mode='floor').to(torch.long)
            self.max_terrain_level = self.cfg.terrain.num_rows
            self.terrain_origins = torch.from_numpy(self.terrain.env_origins).to(self.device).to(torch.float)
            self.env_origins[:] = self.terrain_origins[self.terrain_levels, self.terrain_types]
        else:
            self.custom_origins = False
            self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
            # create a grid of robots
            num_cols = np.floor(np.sqrt(self.num_envs))
            num_rows = np.ceil(self.num_envs / num_cols)
            xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols))
            spacing = self.cfg.env.env_spacing
            self.env_origins[:, 0] = spacing * xx.flatten()[:self.num_envs]
            self.env_origins[:, 1] = spacing * yy.flatten()[:self.num_envs]
            self.env_origins[:, 2] = 0.
||||
def _parse_cfg(self, cfg):
|
||||
self.dt = self.cfg.control.decimation * self.sim_params.dt
|
||||
self.obs_scales = self.cfg.normalization.obs_scales
|
||||
self.reward_scales = class_to_dict(self.cfg.rewards.scales)
|
||||
self.command_ranges = class_to_dict(self.cfg.commands.ranges)
|
||||
if self.cfg.terrain.mesh_type not in ['heightfield', 'trimesh']:
|
||||
self.cfg.terrain.curriculum = False
|
||||
self.max_episode_length_s = self.cfg.env.episode_length_s
|
||||
self.max_episode_length = np.ceil(self.max_episode_length_s / self.dt)
|
||||
|
||||
self.cfg.domain_rand.push_interval = np.ceil(self.cfg.domain_rand.push_interval_s / self.dt)
|
||||
|
||||
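The grid branch of `_get_env_origins` above lays robots out on a near-square grid spaced `env_spacing` apart. A minimal numpy-only sketch of the same layout logic, with hypothetical `num_envs` and `spacing` values (the real code does this with torch tensors):

```python
import numpy as np

# Mirror of the grid branch in _get_env_origins, hypothetical sizes.
num_envs = 6   # hypothetical
spacing = 2.0  # hypothetical env_spacing [m]

num_cols = int(np.floor(np.sqrt(num_envs)))   # 2
num_rows = int(np.ceil(num_envs / num_cols))  # 3
# 'ij' indexing matches the torch.meshgrid behavior the code relies on
xx, yy = np.meshgrid(np.arange(num_rows), np.arange(num_cols), indexing='ij')

env_origins = np.zeros((num_envs, 3))
env_origins[:, 0] = spacing * xx.flatten()[:num_envs]
env_origins[:, 1] = spacing * yy.flatten()[:num_envs]
# z stays 0: robots spawn on the ground plane
```

Slicing the flattened grid to `num_envs` simply drops the unused cells of the last row when the env count is not a perfect rectangle.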
    def _draw_debug_vis(self):
        """ Draws visualizations for debugging (slows down simulation a lot).
            Default behaviour: draws height measurement points
        """
        # draw height lines
        if not self.terrain.cfg.measure_heights:
            return
        self.gym.clear_lines(self.viewer)
        self.gym.refresh_rigid_body_state_tensor(self.sim)
        sphere_geom = gymutil.WireframeSphereGeometry(0.02, 4, 4, None, color=(1, 1, 0))
        for i in range(self.num_envs):
            base_pos = (self.root_states[i, :3]).cpu().numpy()
            heights = self.measured_heights[i].cpu().numpy()
            height_points = quat_apply_yaw(self.base_quat[i].repeat(heights.shape[0]),
                                           self.height_points[i]).cpu().numpy()
            for j in range(heights.shape[0]):
                x = height_points[j, 0] + base_pos[0]
                y = height_points[j, 1] + base_pos[1]
                z = heights[j]
                sphere_pose = gymapi.Transform(gymapi.Vec3(x, y, z), r=None)
                gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], sphere_pose)

    def _init_height_points(self):
        """ Returns points at which the height measurements are sampled (in base frame)

        Returns:
            [torch.Tensor]: Tensor of shape (num_envs, self.num_height_points, 3)
        """
        y = torch.tensor(self.cfg.terrain.measured_points_y, device=self.device, requires_grad=False)
        x = torch.tensor(self.cfg.terrain.measured_points_x, device=self.device, requires_grad=False)
        grid_x, grid_y = torch.meshgrid(x, y)

        self.num_height_points = grid_x.numel()
        points = torch.zeros(self.num_envs, self.num_height_points, 3, device=self.device, requires_grad=False)
        points[:, :, 0] = grid_x.flatten()
        points[:, :, 1] = grid_y.flatten()
        return points

    def _get_heights(self, env_ids=None):
        """ Samples heights of the terrain at required points around each robot.
            The points are offset by the base's position and rotated by the base's yaw

        Args:
            env_ids (List[int], optional): Subset of environments for which to return the heights. Defaults to None.

        Raises:
            NameError: When the terrain mesh type is 'none' and heights cannot be measured.

        Returns:
            [torch.Tensor]: Tensor of shape (num_envs, num_height_points) with the measured heights [m]
        """
        if self.cfg.terrain.mesh_type == 'plane':
            return torch.zeros(self.num_envs, self.num_height_points, device=self.device, requires_grad=False)
        elif self.cfg.terrain.mesh_type == 'none':
            raise NameError("Can't measure height with terrain mesh type 'none'")

        if env_ids:
            points = quat_apply_yaw(self.base_quat[env_ids].repeat(1, self.num_height_points),
                                    self.height_points[env_ids]) + (self.root_states[env_ids, :3]).unsqueeze(1)
        else:
            points = quat_apply_yaw(self.base_quat.repeat(1, self.num_height_points),
                                    self.height_points) + (self.root_states[:, :3]).unsqueeze(1)

        points += self.terrain.cfg.border_size
        points = (points / self.terrain.cfg.horizontal_scale).long()
        px = points[:, :, 0].view(-1)
        py = points[:, :, 1].view(-1)
        px = torch.clip(px, 0, self.height_samples.shape[0] - 2)
        py = torch.clip(py, 0, self.height_samples.shape[1] - 2)

        heights1 = self.height_samples[px, py]
        heights2 = self.height_samples[px + 1, py]
        heights3 = self.height_samples[px, py + 1]
        heights = torch.min(heights1, heights2)
        heights = torch.min(heights, heights3)

        return heights.view(self.num_envs, -1) * self.terrain.cfg.vertical_scale
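`_get_heights` above quantizes each query point to the heightfield grid and takes the minimum over the cell and its +x/+y neighbours, a conservative estimate near stair edges. A small numpy sketch of that lookup for a single point, using a hypothetical heightfield and the same scale conventions:

```python
import numpy as np

# Hypothetical 4x4 heightfield in discrete units; scales map grid cells
# and height units to meters, mirroring terrain.cfg.*_scale.
height_samples = np.array([[0, 0, 0, 0],
                           [0, 2, 4, 4],
                           [0, 2, 4, 4],
                           [0, 0, 0, 0]])
horizontal_scale = 0.1   # [m/cell]
vertical_scale = 0.005   # [m/unit]

def sample_height(x, y):
    # quantize to grid indices; clip so the +1 neighbours stay in bounds
    px = int(np.clip(x / horizontal_scale, 0, height_samples.shape[0] - 2))
    py = int(np.clip(y / horizontal_scale, 0, height_samples.shape[1] - 2))
    # conservative estimate: min over the cell and its +x / +y neighbours
    h = min(height_samples[px, py], height_samples[px + 1, py], height_samples[px, py + 1])
    return h * vertical_scale

h = sample_height(0.1, 0.1)
```

Taking the minimum rather than interpolating means a foot placed on a step edge is assumed to land on the lower surface.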
    # ------------ reward functions ----------------
    def _reward_lin_vel_z(self):
        # Penalize z axis base linear velocity
        return torch.square(self.base_lin_vel[:, 2])

    def _reward_ang_vel_xy(self):
        # Penalize xy axes base angular velocity
        return torch.sum(torch.square(self.base_ang_vel[:, :2]), dim=1)

    def _reward_orientation(self):
        # Penalize non flat base orientation
        return torch.sum(torch.square(self.projected_gravity[:, :2]), dim=1)

    def _reward_base_height(self):
        # Penalize base height away from target
        base_height = torch.mean(self.root_states[:, 2].unsqueeze(1) - self.measured_heights, dim=1)
        return torch.square(base_height - self.cfg.rewards.base_height_target)

    def _reward_torques(self):
        # Penalize torques
        return torch.sum(torch.square(self.torques), dim=1)

    def _reward_dof_vel(self):
        # Penalize dof velocities
        return torch.sum(torch.square(self.dof_vel), dim=1)

    def _reward_dof_acc(self):
        # Penalize dof accelerations
        return torch.sum(torch.square((self.last_dof_vel - self.dof_vel) / self.dt), dim=1)

    def _reward_action_rate(self):
        # Penalize changes in actions
        return torch.sum(torch.square(self.last_actions - self.actions), dim=1)

    def _reward_collision(self):
        # Penalize collisions on selected bodies
        return torch.sum(1. * (torch.norm(self.contact_forces[:, self.penalised_contact_indices, :], dim=-1) > 0.1),
                         dim=1)

    def _reward_termination(self):
        # Terminal reward / penalty
        return self.reset_buf * ~self.time_out_buf

    def _reward_dof_pos_limits(self):
        # Penalize dof positions too close to the limit
        out_of_limits = -(self.dof_pos - self.dof_pos_limits[:, 0]).clip(max=0.)  # lower limit
        out_of_limits += (self.dof_pos - self.dof_pos_limits[:, 1]).clip(min=0.)  # upper limit
        return torch.sum(out_of_limits, dim=1)

    def _reward_dof_vel_limits(self):
        # Penalize dof velocities too close to the limit
        # clip to max error = 1 rad/s per joint to avoid huge penalties
        return torch.sum(
            (torch.abs(self.dof_vel) - self.dof_vel_limits * self.cfg.rewards.soft_dof_vel_limit).clip(min=0., max=1.),
            dim=1)

    def _reward_torque_limits(self):
        # penalize torques too close to the limit
        return torch.sum(
            (torch.abs(self.torques) - self.torque_limits * self.cfg.rewards.soft_torque_limit).clip(min=0.), dim=1)

    def _reward_tracking_lin_vel(self):
        # Tracking of linear velocity commands (xy axes)
        lin_vel_error = torch.sum(torch.square(self.commands[:, :2] - self.base_lin_vel[:, :2]), dim=1)
        return torch.exp(-lin_vel_error / self.cfg.rewards.tracking_sigma)

    def _reward_tracking_ang_vel(self):
        # Tracking of angular velocity commands (yaw)
        ang_vel_error = torch.square(self.commands[:, 2] - self.base_ang_vel[:, 2])
        return torch.exp(-ang_vel_error / self.cfg.rewards.tracking_sigma)

    def _reward_feet_air_time(self):
        # Reward long steps
        # Need to filter the contacts because the contact reporting of PhysX is unreliable on meshes
        contact = self.contact_forces[:, self.feet_indices, 2] > 1.
        contact_filt = torch.logical_or(contact, self.last_contacts)
        self.last_contacts = contact
        first_contact = (self.feet_air_time > 0.) * contact_filt
        self.feet_air_time += self.dt
        rew_airTime = torch.sum((self.feet_air_time - 0.5) * first_contact,
                                dim=1)  # reward only on first contact with the ground
        rew_airTime *= torch.norm(self.commands[:, :2], dim=1) > 0.1  # no reward for zero command
        self.feet_air_time *= ~contact_filt
        return rew_airTime

    def _reward_stumble(self):
        # Penalize feet hitting vertical surfaces
        return torch.any(torch.norm(self.contact_forces[:, self.feet_indices, :2], dim=2) >
                         5 * torch.abs(self.contact_forces[:, self.feet_indices, 2]), dim=1)

    def _reward_stand_still(self):
        # Penalize motion at zero commands
        return torch.sum(torch.abs(self.dof_pos - self.default_dof_pos), dim=1) * (
                torch.norm(self.commands[:, :2], dim=1) < 0.1)

    def _reward_feet_contact_forces(self):
        # penalize high contact forces
        return torch.sum((torch.norm(self.contact_forces[:, self.feet_indices, :],
                                     dim=-1) - self.cfg.rewards.max_contact_force).clip(min=0.), dim=1)
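The two tracking rewards above push the squared velocity error through a Gaussian kernel, exp(-error/sigma): the reward is 1 at zero error and decays smoothly as tracking degrades. A standalone numeric sketch of that shaping, with a hypothetical sigma:

```python
import numpy as np

tracking_sigma = 0.25  # hypothetical value for illustration

def tracking_reward(command_vel, actual_vel):
    # mirrors _reward_tracking_lin_vel: squared error -> Gaussian kernel
    err = np.sum(np.square(np.asarray(command_vel) - np.asarray(actual_vel)))
    return np.exp(-err / tracking_sigma)

perfect = tracking_reward([1.0, 0.0], [1.0, 0.0])  # zero error -> reward 1
off = tracking_reward([1.0, 0.0], [0.5, 0.0])      # decays with the error
```

Because the kernel is bounded in (0, 1], tracking terms can safely carry large positive scales without exploding, while the penalty terms stay small and negative.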
187
legged_gym/envs/proto/pro_config.py
Normal file
@@ -0,0 +1,187 @@
from legged_gym.envs.proto.legged_robot_config_pro import LeggedRobotCfg_PRO, LeggedRobotCfgPPO_PRO


class PRORoughCfg(LeggedRobotCfg_PRO):
    class env(LeggedRobotCfg_PRO.env):
        num_envs = 4096
        num_actions = 8
        num_observations = 45  # + 187

    class commands(LeggedRobotCfg_PRO.commands):
        curriculum = False
        max_curriculum = 1.5
        num_commands = 4  # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10.  # time before commands are changed [s]
        heading_command = False  # if true: compute ang vel command from heading error

        class ranges:
            lin_vel_x = [-1.5, 1.5]  # min max [m/s]
            lin_vel_y = [-1.5, 1.5]  # min max [m/s]
            ang_vel_yaw = [-1.0, 1.0]  # min max [rad/s]
            heading = [-3.14, 3.14]

    class terrain(LeggedRobotCfg_PRO.terrain):
        '''mesh_type = 'plane'  # "heightfield" # none, plane, heightfield or trimesh
        horizontal_scale = 0.1  # [m]
        vertical_scale = 0.005  # [m]
        border_size = 25  # [m]
        curriculum = False
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.
        # rough terrain only:
        measure_heights = False
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,
                             0.8]  # 1mx1.6m rectangle (without center line)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # Dict of arguments for selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)
        # terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
        terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
        # trimesh only:
        slope_treshold = 0.75  # slopes above this threshold will be corrected to vertical surfaces'''

        mesh_type = 'trimesh'  # use a triangle mesh to represent complex terrain
        horizontal_scale = 0.1  # horizontal scale [m/unit]
        vertical_scale = 0.005  # vertical scale [m/unit], controls the amount of terrain relief
        border_size = 25.  # border size [m]
        curriculum = False  # disable curriculum learning: train on the complex terrain from the start
        static_friction = 1.0
        dynamic_friction = 1.0
        restitution = 0.

        # terrain type proportions [smooth slope, rough slope, stairs up, stairs down, discrete obstacles]
        # here we mainly use rough slopes to create slightly uneven terrain
        terrain_proportions = [0.0, 1.0, 0.0, 0.0, 0.0]  # 100% rough slopes

        # terrain measurement parameters
        measure_heights = False
        measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # 1.6 m line (17 points)
        measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]  # 11 lines
        selected = False  # select a unique terrain type and pass all arguments
        terrain_kwargs = None  # Dict of arguments for selected terrain
        max_init_terrain_level = 5  # starting curriculum state
        terrain_length = 8.
        terrain_width = 8.
        num_rows = 10  # number of terrain rows (levels)
        num_cols = 20  # number of terrain cols (types)

        # terrain generation parameters - for rough terrain
        difficulty_scale = 1.0  # difficulty scaling factor
        roughness = 0.3  # roughness parameter (0-1), controls the amount of relief
        # for slightly uneven terrain, use a medium roughness

    class init_state(LeggedRobotCfg_PRO.init_state):
        pos = [0.0, 0.0, 0.65]  # x,y,z [m]
        default_joint_angles = {  # = target angles [rad] when action = 0.0
            'L1_Joint': 0.0,  # [rad]
            'L_wheel_1_Joint': 0.0,  # [rad]
            'L2_Joint': 0.0,  # [rad]
            'L_wheel_2_Joint': 0.0,  # [rad]

            'R1_Joint': 0.0,  # [rad]
            'R_wheel_1_Joint': 0.0,  # [rad]
            'R2_Joint': 0.0,  # [rad]
            'R_wheel_2_Joint': 0.0,  # [rad]
        }
        init_joint_angles = {  # = target angles [rad] when action = 0.0
            'L1_Joint': 0.0,  # [rad]
            'L_wheel_1_Joint': 0.0,  # [rad]
            'L2_Joint': 0.0,  # [rad]
            'L_wheel_2_Joint': 0.0,  # [rad]

            'R1_Joint': 0.0,  # [rad]
            'R_wheel_1_Joint': 0.0,  # [rad]
            'R2_Joint': 0.0,  # [rad]
            'R_wheel_2_Joint': 0.0,  # [rad]
        }

    class rewards:
        class scales:
            termination = -0.8
            tracking_lin_vel = 3.0
            tracking_ang_vel = 1.5
            lin_vel_z = -0.1
            ang_vel_xy = -0.05
            orientation = 4
            torques = -0.0001
            dof_vel = -1e-7
            dof_acc = -1e-7
            base_height = -0.5
            feet_air_time = 0
            collision = -0.1
            feet_stumble = -0.1
            action_rate = -0.0002
            stand_still = -0.01
            dof_pos_limits = -0.9
            arm_pos = -0.
            hip_action_l2 = -0.1

        only_positive_rewards = True  # if true negative total rewards are clipped at zero (avoids early termination problems)
        tracking_sigma = 0.4  # tracking reward = exp(-error^2/sigma)
        soft_dof_pos_limit = 0.9  # percentage of urdf limits, values above this limit are penalized
        soft_dof_vel_limit = 0.9
        soft_torque_limit = 1.
        base_height_target = 0.4
        max_contact_force = 200.  # forces above this value are penalized

    class control(LeggedRobotCfg_PRO.control):
        # PD Drive parameters:
        control_type = 'P'

        stiffness = {'L1_Joint': 150.0,
                     'L_wheel_1_Joint': 50.0,
                     'L2_Joint': 150.0,
                     'L_wheel_2_Joint': 50.0,

                     'R1_Joint': 150.0,
                     'R_wheel_1_Joint': 50.0,
                     'R2_Joint': 150.0,
                     'R_wheel_2_Joint': 50.0,
                     }  # [N*m/rad]

        damping = {'L1_Joint': 5.0,
                   'L_wheel_1_Joint': 5.0,
                   'L2_Joint': 5.0,
                   'L_wheel_2_Joint': 5.0,

                   'R1_Joint': 5.0,
                   'R_wheel_1_Joint': 5.0,
                   'R2_Joint': 5.0,
                   'R_wheel_2_Joint': 5.0,
                   }  # [N*m*s/rad]
        # action scale: target angle = actionScale * action + defaultAngle
        action_scale = 0.25
        # decimation: Number of control action updates @ sim DT per policy DT
        decimation = 4

    class asset(LeggedRobotCfg_PRO.asset):
        file = '/home/hivecore/HiveCode_rl_gym/resources/robots/Proto/urdf/proto.urdf'
        name = "proto"
        arm_name = ""
        foot_name = "wheel"
        wheel_name = ['L_wheel_1_Joint', 'L_wheel_2_Joint', 'R_wheel_1_Joint', 'R_wheel_2_Joint']
        penalize_contacts_on = ["base_link"]
        terminate_after_contacts_on = []
        self_collisions = 0  # 1 to disable, 0 to enable... bitwise filter "base","calf","hip","thigh"
        replace_cylinder_with_capsule = False
        flip_visual_attachments = False


class PRORoughCfgPPO(LeggedRobotCfgPPO_PRO):
    class algorithm(LeggedRobotCfgPPO_PRO.algorithm):
        entropy_coef = 0.003

    class runner(LeggedRobotCfgPPO_PRO.runner):
        run_name = ''
        experiment_name = 'proto'
        resume = False
        num_steps_per_env = 48  # per iteration
        max_iterations = 5000
        load_run = ''
        checkpoint = 0
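PRORoughCfg above configures everything through nested-class inheritance: each inner class overrides only the fields that differ from the base config, and unchanged fields fall through to the parent. A minimal standalone sketch of that pattern, with hypothetical classes standing in for the real LeggedRobotCfg_PRO hierarchy:

```python
# Hypothetical base config mimicking the LeggedRobotCfg nested-class style
class BaseCfg:
    class env:
        num_envs = 1024
        num_actions = 12

# Derived config overrides only what differs; the rest is inherited
class RobotCfg(BaseCfg):
    class env(BaseCfg.env):
        num_envs = 4096  # overridden; num_actions still comes from BaseCfg.env

print(RobotCfg.env.num_envs, RobotCfg.env.num_actions)
```

Note the inner class must itself subclass `BaseCfg.env`; a plain `class env:` would replace the parent's inner class wholesale and lose the inherited fields.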
1036
legged_gym/envs/proto/pro_env.py
Normal file
File diff suppressed because it is too large
Load Diff
52
legged_gym/scripts/play.py
Normal file
@@ -0,0 +1,52 @@
import os
import sys

import isaacgym
from legged_gym import LEGGED_GYM_ROOT_DIR
from legged_gym.envs import *
from legged_gym.utils import get_args, export_policy_as_jit, task_registry, Logger

import numpy as np
import torch


def play(args):
    env_cfg, train_cfg = task_registry.get_cfgs(name=args.task)
    # override some parameters for testing
    env_cfg.terrain.mesh_type = 'trimesh'  # recent change
    env_cfg.env.num_envs = min(env_cfg.env.num_envs, 48)
    env_cfg.terrain.num_rows = 5
    env_cfg.terrain.num_cols = 5
    env_cfg.terrain.curriculum = False
    env_cfg.noise.add_noise = False
    env_cfg.domain_rand.randomize_friction = False
    env_cfg.domain_rand.push_robots = False

    env_cfg.env.test = True

    # prepare environment
    env, _ = task_registry.make_env(name=args.task, args=args, env_cfg=env_cfg)
    obs = env.get_observations()
    # load policy
    train_cfg.runner.resume = True
    ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args, train_cfg=train_cfg)
    policy = ppo_runner.get_inference_policy(device=env.device)

    # export policy as a jit module (used to run it from C++)
    if EXPORT_POLICY:
        path = os.path.join(LEGGED_GYM_ROOT_DIR, 'logs', train_cfg.runner.experiment_name, 'exported', 'policies')
        export_policy_as_jit(ppo_runner.alg.actor_critic, path)
        print('Exported policy as jit script to: ', path)

    for i in range(10 * int(env.max_episode_length)):
        actions = policy(obs.detach())
        obs, _, rews, dones, infos = env.step(actions.detach())


if __name__ == '__main__':
    EXPORT_POLICY = True
    RECORD_FRAMES = False
    MOVE_CAMERA = False
    args = get_args()
    play(args)
23
legged_gym/scripts/train.py
Normal file
@@ -0,0 +1,23 @@
import os
import numpy as np
from datetime import datetime
import sys

import isaacgym
from legged_gym.envs import *
from legged_gym.utils import get_args, task_registry
import torch


def train(args):
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
    ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
    ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations, init_at_random_ep_len=True)
    """ # override the terrain configuration
    env_cfg.terrain.terrain_proportions = [0.0, 1.0, 0.0, 0.0, 0.0]  # 100% rough slopes
    env_cfg.terrain.curriculum = False  # disable curriculum learning
    env_cfg.terrain.roughness = 0.3  # medium roughness
    env_cfg.terrain.vertical_scale = 0.005  # vertical scale """


if __name__ == '__main__':
    args = get_args()
    train(args)
5
legged_gym/utils/__init__.py
Normal file
@@ -0,0 +1,5 @@
from .helpers import class_to_dict, get_load_path, get_args, export_policy_as_jit, set_seed, update_class_from_dict
from .task_registry import task_registry
from .logger import Logger
from .math import *
from .terrain import Terrain
191
legged_gym/utils/helpers.py
Normal file
@@ -0,0 +1,191 @@
import os
import copy
import torch
import numpy as np
import random
from isaacgym import gymapi
from isaacgym import gymutil

from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR


def class_to_dict(obj) -> dict:
    if not hasattr(obj, "__dict__"):
        return obj
    result = {}
    for key in dir(obj):
        if key.startswith("_"):
            continue
        element = []
        val = getattr(obj, key)
        if isinstance(val, list):
            for item in val:
                element.append(class_to_dict(item))
        else:
            element = class_to_dict(val)
        result[key] = element
    return result
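`class_to_dict` above is how `_parse_cfg` turns nested config classes into plain dicts: it walks public attributes, recurses into anything with a `__dict__`, and passes primitives through. A standalone sketch with the same logic and a hypothetical config class:

```python
# Standalone copy of the class_to_dict logic from helpers.py: recursively
# converts a nested config class into plain dicts, skipping private names.
def class_to_dict(obj) -> dict:
    if not hasattr(obj, "__dict__"):
        return obj  # primitives (floats, ints, strings) pass through
    result = {}
    for key in dir(obj):
        if key.startswith("_"):
            continue
        val = getattr(obj, key)
        if isinstance(val, list):
            element = [class_to_dict(item) for item in val]
        else:
            element = class_to_dict(val)
        result[key] = element
    return result

# Hypothetical config mirroring the nested-class style of the reward scales
class scales:
    tracking_lin_vel = 3.0
    class nested:
        value = 1

d = class_to_dict(scales)
```

This is why the env code can index `self.reward_scales["tracking_lin_vel"]` at runtime even though the config is written as class attributes.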
def update_class_from_dict(obj, dict):
    for key, val in dict.items():
        attr = getattr(obj, key, None)
        if isinstance(attr, type):
            update_class_from_dict(attr, val)
        else:
            setattr(obj, key, val)
    return


def set_seed(seed):
    if seed == -1:
        seed = np.random.randint(0, 10000)
    print("Setting seed: {}".format(seed))

    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def parse_sim_params(args, cfg):
    # code from Isaac Gym Preview 2
    # initialize sim params
    sim_params = gymapi.SimParams()

    # set some values from args
    if args.physics_engine == gymapi.SIM_FLEX:
        if args.device != "cpu":
            print("WARNING: Using Flex with GPU instead of PHYSX!")
    elif args.physics_engine == gymapi.SIM_PHYSX:
        sim_params.physx.use_gpu = args.use_gpu
        sim_params.physx.num_subscenes = args.subscenes
    sim_params.use_gpu_pipeline = args.use_gpu_pipeline

    # if sim options are provided in cfg, parse them and update/override above:
    if "sim" in cfg:
        gymutil.parse_sim_config(cfg["sim"], sim_params)

    # Override num_threads if passed on the command line
    if args.physics_engine == gymapi.SIM_PHYSX and args.num_threads > 0:
        sim_params.physx.num_threads = args.num_threads

    return sim_params


def get_load_path(root, load_run=-1, checkpoint=-1):
    try:
        runs = os.listdir(root)
        # TODO sort by date to handle change of month
        runs.sort()
        if 'exported' in runs:
            runs.remove('exported')
        last_run = os.path.join(root, runs[-1])
    except Exception:
        raise ValueError("No runs in this directory: " + root)
    if load_run == -1:
        load_run = last_run
    else:
        load_run = os.path.join(root, load_run)

    if checkpoint == -1:
        models = [file for file in os.listdir(load_run) if 'model' in file]
        models.sort(key=lambda m: '{0:0>15}'.format(m))
        model = models[-1]
    else:
        model = "model_{}.pt".format(checkpoint)

    load_path = os.path.join(load_run, model)
    return load_path
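The sort key in `get_load_path` left-pads each filename with zeros to width 15, which makes plain lexicographic sorting behave numerically for checkpoint names of differing length: shorter names get more leading zeros, so `model_9.pt` sorts before `model_10.pt`. A standalone demo of that trick:

```python
# Zero-padded sort key, as used in get_load_path to pick the latest checkpoint.
models = ['model_10.pt', 'model_9.pt', 'model_100.pt']
models.sort(key=lambda m: '{0:0>15}'.format(m))
# models[-1] is now the highest-numbered checkpoint
```

Without the padding, `'model_9.pt' > 'model_10.pt'` lexicographically and the wrong checkpoint would be treated as the latest.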
def update_cfg_from_args(env_cfg, cfg_train, args):
    if env_cfg is not None:
        # num envs
        if args.num_envs is not None:
            env_cfg.env.num_envs = args.num_envs
    if cfg_train is not None:
        # seed
        if args.seed is not None:
            cfg_train.seed = args.seed
        # alg runner parameters
        if args.max_iterations is not None:
            cfg_train.runner.max_iterations = args.max_iterations
        if args.resume:
            cfg_train.runner.resume = args.resume
        if args.experiment_name is not None:
            cfg_train.runner.experiment_name = args.experiment_name
        if args.run_name is not None:
            cfg_train.runner.run_name = args.run_name
        if args.load_run is not None:
            cfg_train.runner.load_run = args.load_run
        if args.checkpoint is not None:
            cfg_train.runner.checkpoint = args.checkpoint

    return env_cfg, cfg_train


def get_args():
    custom_parameters = [
        {"name": "--task", "type": str, "default": "go2", "help": "Name of the task to run. Overrides config file if provided."},
        {"name": "--resume", "action": "store_true", "default": False, "help": "Resume training from a checkpoint"},
        {"name": "--experiment_name", "type": str, "help": "Name of the experiment to run or load. Overrides config file if provided."},
        {"name": "--run_name", "type": str, "help": "Name of the run. Overrides config file if provided."},
        {"name": "--load_run", "type": str, "help": "Name of the run to load when resume=True. If -1: will load the last run. Overrides config file if provided."},
        {"name": "--checkpoint", "type": int, "help": "Saved model checkpoint number. If -1: will load the last checkpoint. Overrides config file if provided."},

        {"name": "--headless", "action": "store_true", "default": False, "help": "Force display off at all times"},
        {"name": "--horovod", "action": "store_true", "default": False, "help": "Use horovod for multi-gpu training"},
        {"name": "--rl_device", "type": str, "default": "cuda:0", "help": "Device used by the RL algorithm, (cpu, gpu, cuda:0, cuda:1 etc..)"},
        {"name": "--num_envs", "type": int, "help": "Number of environments to create. Overrides config file if provided."},
        {"name": "--seed", "type": int, "help": "Random seed. Overrides config file if provided."},
        {"name": "--max_iterations", "type": int, "help": "Maximum number of training iterations. Overrides config file if provided."},
    ]
    # parse arguments
    args = gymutil.parse_arguments(
        description="RL Policy",
        custom_parameters=custom_parameters)

    # name alignment
    args.sim_device_id = args.compute_device_id
    args.sim_device = args.sim_device_type
    if args.sim_device == 'cuda':
        args.sim_device += f":{args.sim_device_id}"
    return args


def export_policy_as_jit(actor_critic, path):
    if hasattr(actor_critic, 'memory_a'):
        # assumes LSTM: TODO add GRU
        exporter = PolicyExporterLSTM(actor_critic)
        exporter.export(path)
    else:
        os.makedirs(path, exist_ok=True)
        path = os.path.join(path, 'policy_1.pt')
        model = copy.deepcopy(actor_critic.actor).to('cpu')
        traced_script_module = torch.jit.script(model)
        traced_script_module.save(path)


class PolicyExporterLSTM(torch.nn.Module):
    def __init__(self, actor_critic):
        super().__init__()
        self.actor = copy.deepcopy(actor_critic.actor)
        self.is_recurrent = actor_critic.is_recurrent
        self.memory = copy.deepcopy(actor_critic.memory_a.rnn)
        self.memory.cpu()
        self.register_buffer('hidden_state', torch.zeros(self.memory.num_layers, 1, self.memory.hidden_size))
        self.register_buffer('cell_state', torch.zeros(self.memory.num_layers, 1, self.memory.hidden_size))

    def forward(self, x):
        out, (h, c) = self.memory(x.unsqueeze(0), (self.hidden_state, self.cell_state))
        self.hidden_state[:] = h
        self.cell_state[:] = c
        return self.actor(out.squeeze(0))

    @torch.jit.export
    def reset_memory(self):
        self.hidden_state[:] = 0.
        self.cell_state[:] = 0.

    def export(self, path):
        os.makedirs(path, exist_ok=True)
        path = os.path.join(path, 'policy_lstm_1.pt')
        self.to('cpu')
        traced_script_module = torch.jit.script(self)
        traced_script_module.save(path)
30
legged_gym/utils/isaacgym_utils.py
Normal file
@@ -0,0 +1,30 @@
import os
import numpy as np
import random
import torch


@torch.jit.script
def copysign(a, b):
    # type: (float, Tensor) -> Tensor
    a = torch.tensor(a, device=b.device, dtype=torch.float).repeat(b.shape[0])
    return torch.abs(a) * torch.sign(b)


def get_euler_xyz(q):
    qx, qy, qz, qw = 0, 1, 2, 3
    # roll (x-axis rotation)
    sinr_cosp = 2.0 * (q[:, qw] * q[:, qx] + q[:, qy] * q[:, qz])
    cosr_cosp = q[:, qw] * q[:, qw] - q[:, qx] * q[:, qx] - q[:, qy] * q[:, qy] + q[:, qz] * q[:, qz]
    roll = torch.atan2(sinr_cosp, cosr_cosp)

    # pitch (y-axis rotation)
    sinp = 2.0 * (q[:, qw] * q[:, qy] - q[:, qz] * q[:, qx])
    pitch = torch.where(torch.abs(sinp) >= 1, copysign(np.pi / 2.0, sinp), torch.asin(sinp))

    # yaw (z-axis rotation)
    siny_cosp = 2.0 * (q[:, qw] * q[:, qz] + q[:, qx] * q[:, qy])
    cosy_cosp = q[:, qw] * q[:, qw] + q[:, qx] * q[:, qx] - q[:, qy] * q[:, qy] - q[:, qz] * q[:, qz]
    yaw = torch.atan2(siny_cosp, cosy_cosp)

    return torch.stack((roll, pitch, yaw), dim=-1)
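`get_euler_xyz` above assumes (x, y, z, w) quaternion ordering and clamps pitch to ±π/2 at the gimbal-lock singularity. A numpy sketch of the same formulas for a single quaternion (standalone; the batched torch version operates on whole tensors at once):

```python
import numpy as np

def euler_xyz(q):
    # q = (x, y, z, w), same ordering and formulas as get_euler_xyz above
    qx, qy, qz, qw = q
    roll = np.arctan2(2.0 * (qw * qx + qy * qz),
                      qw * qw - qx * qx - qy * qy + qz * qz)
    sinp = 2.0 * (qw * qy - qz * qx)
    # |sinp| >= 1 means gimbal lock: saturate pitch at +/- pi/2
    pitch = np.copysign(np.pi / 2.0, sinp) if abs(sinp) >= 1 else np.arcsin(sinp)
    yaw = np.arctan2(2.0 * (qw * qz + qx * qy),
                     qw * qw + qx * qx - qy * qy - qz * qz)
    return roll, pitch, yaw

angles = euler_xyz((0.0, 0.0, 0.0, 1.0))  # identity rotation -> all zeros
```

The custom `copysign` in the file above exists only because `torch.jit.script` needs a tensor-valued version of the scalar `np.copysign` used here.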
39
legged_gym/utils/logger.py
Normal file
@@ -0,0 +1,39 @@
import numpy as np
from collections import defaultdict
from multiprocessing import Process, Value


class Logger:
    def __init__(self, dt):
        self.state_log = defaultdict(list)
        self.rew_log = defaultdict(list)
        self.dt = dt
        self.num_episodes = 0
        self.plot_process = None

    def log_state(self, key, value):
        self.state_log[key].append(value)

    def log_states(self, dict):
        for key, value in dict.items():
            self.log_state(key, value)

    def log_rewards(self, dict, num_episodes):
        for key, value in dict.items():
            if 'rew' in key:
                self.rew_log[key].append(value.item() * num_episodes)
        self.num_episodes += num_episodes

    def reset(self):
        self.state_log.clear()
        self.rew_log.clear()

    def print_rewards(self):
        print("Average rewards per second:")
        for key, values in self.rew_log.items():
            mean = np.sum(np.array(values)) / self.num_episodes
            print(f"  - {key}: {mean}")
        print(f"Total number of episodes: {self.num_episodes}")

    def __del__(self):
        if self.plot_process is not None:
            self.plot_process.kill()
26
legged_gym/utils/math.py
Normal file
@@ -0,0 +1,26 @@
import torch
from torch import Tensor
import numpy as np
from isaacgym.torch_utils import quat_apply, normalize
from typing import Tuple


# @ torch.jit.script
def quat_apply_yaw(quat, vec):
    quat_yaw = quat.clone().view(-1, 4)
    quat_yaw[:, :2] = 0.
    quat_yaw = normalize(quat_yaw)
    return quat_apply(quat_yaw, vec)


# @ torch.jit.script
def wrap_to_pi(angles):
    angles %= 2 * np.pi
    angles -= 2 * np.pi * (angles > np.pi)
    return angles


# @ torch.jit.script
def torch_rand_sqrt_float(lower, upper, shape, device):
    # type: (float, float, Tuple[int, int], str) -> Tensor
    r = 2 * torch.rand(*shape, device=device) - 1
    r = torch.where(r < 0., -torch.sqrt(-r), torch.sqrt(r))
    r = (r + 1.) / 2.
    return (upper - lower) * r + lower
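`wrap_to_pi` maps any angle into (-pi, pi] via a modulo into [0, 2*pi) followed by a conditional shift of 2*pi. The same logic in plain NumPy, as a sketch for checking values (not the torch implementation itself):

```python
import numpy as np

def wrap_to_pi_np(angles):
    # modulo into [0, 2*pi), then shift anything above pi down by 2*pi
    angles = np.asarray(angles, dtype=float) % (2 * np.pi)
    return angles - 2 * np.pi * (angles > np.pi)

print(wrap_to_pi_np([0.0, np.pi, 3 * np.pi / 2, -np.pi / 2]))
# 3*pi/2 and -pi/2 both map to -pi/2; pi maps to itself
```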
129
legged_gym/utils/task_registry.py
Normal file
@@ -0,0 +1,129 @@
import os
from datetime import datetime
from typing import Tuple
import torch
import numpy as np
import sys

from rsl_rl.env import VecEnv
from rsl_rl.runners import OnPolicyRunner

from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR
from .helpers import get_args, update_cfg_from_args, class_to_dict, get_load_path, set_seed, parse_sim_params
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO


class TaskRegistry():
    def __init__(self):
        self.task_classes = {}
        self.env_cfgs = {}
        self.train_cfgs = {}

    def register(self, name: str, task_class: VecEnv, env_cfg: LeggedRobotCfg, train_cfg: LeggedRobotCfgPPO):
        self.task_classes[name] = task_class
        self.env_cfgs[name] = env_cfg
        self.train_cfgs[name] = train_cfg

    def get_task_class(self, name: str) -> VecEnv:
        return self.task_classes[name]

    def get_cfgs(self, name) -> Tuple[LeggedRobotCfg, LeggedRobotCfgPPO]:
        train_cfg = self.train_cfgs[name]
        env_cfg = self.env_cfgs[name]
        # copy seed
        env_cfg.seed = train_cfg.seed
        return env_cfg, train_cfg

    def make_env(self, name, args=None, env_cfg=None) -> Tuple[VecEnv, LeggedRobotCfg]:
        """ Creates an environment either from a registered name or from the provided config file.

        Args:
            name (string): Name of a registered env.
            args (Args, optional): Isaac Gym command line arguments. If None, get_args() will be called. Defaults to None.
            env_cfg (Dict, optional): Environment config file used to override the registered config. Defaults to None.

        Raises:
            ValueError: Error if no registered env corresponds to 'name'

        Returns:
            isaacgym.VecTaskPython: The created environment
            Dict: the corresponding config file
        """
        # if no args passed, get command line arguments
        if args is None:
            args = get_args()
        # check if there is a registered env with that name
        if name in self.task_classes:
            task_class = self.get_task_class(name)
        else:
            raise ValueError(f"Task with name: {name} was not registered")
        if env_cfg is None:
            # load config files
            env_cfg, _ = self.get_cfgs(name)
        # override cfg from args (if specified)
        env_cfg, _ = update_cfg_from_args(env_cfg, None, args)
        set_seed(env_cfg.seed)
        # parse sim params (convert to dict first)
        sim_params = {"sim": class_to_dict(env_cfg.sim)}
        sim_params = parse_sim_params(args, sim_params)
        env = task_class(cfg=env_cfg,
                         sim_params=sim_params,
                         physics_engine=args.physics_engine,
                         sim_device=args.sim_device,
                         headless=args.headless)
        return env, env_cfg

    def make_alg_runner(self, env, name=None, args=None, train_cfg=None, log_root="default") -> Tuple[OnPolicyRunner, LeggedRobotCfgPPO]:
        """ Creates the training algorithm either from a registered name or from the provided config file.

        Args:
            env (isaacgym.VecTaskPython): The environment to train (TODO: remove from within the algorithm)
            name (string, optional): Name of a registered env. If None, the config file will be used instead. Defaults to None.
            args (Args, optional): Isaac Gym command line arguments. If None, get_args() will be called. Defaults to None.
            train_cfg (Dict, optional): Training config file. If None, 'name' will be used to get the config file. Defaults to None.
            log_root (str, optional): Logging directory for Tensorboard. Set to 'None' to avoid logging (at test time for example).
                                      Logs will be saved in <log_root>/<date_time>_<run_name>. Defaults to "default"=<path_to_LEGGED_GYM>/logs/<experiment_name>.

        Raises:
            ValueError: Error if neither 'name' nor 'train_cfg' is provided
            Warning: If both 'name' and 'train_cfg' are provided, 'name' is ignored

        Returns:
            PPO: The created algorithm
            Dict: the corresponding config file
        """
        # if no args passed, get command line arguments
        if args is None:
            args = get_args()
        # if config files are passed use them, otherwise load from the name
        if train_cfg is None:
            if name is None:
                raise ValueError("Either 'name' or 'train_cfg' must not be None")
            # load config files
            _, train_cfg = self.get_cfgs(name)
        else:
            if name is not None:
                print(f"'train_cfg' provided -> Ignoring 'name={name}'")
        # override cfg from args (if specified)
        _, train_cfg = update_cfg_from_args(None, train_cfg, args)

        if log_root == "default":
            log_root = os.path.join(LEGGED_GYM_ROOT_DIR, 'logs', train_cfg.runner.experiment_name)
            log_dir = os.path.join(log_root, datetime.now().strftime('%b%d_%H-%M-%S') + '_' + train_cfg.runner.run_name)
        elif log_root is None:
            log_dir = None
        else:
            log_dir = os.path.join(log_root, datetime.now().strftime('%b%d_%H-%M-%S') + '_' + train_cfg.runner.run_name)

        train_cfg_dict = class_to_dict(train_cfg)
        runner = OnPolicyRunner(env, train_cfg_dict, log_dir, device=args.rl_device)
        # save resume path before creating a new log_dir
        resume = train_cfg.runner.resume
        if resume:
            # load previously trained model
            resume_path = get_load_path(log_root, load_run=train_cfg.runner.load_run, checkpoint=train_cfg.runner.checkpoint)
            print(f"Loading model from: {resume_path}")
            runner.load(resume_path)
        return runner, train_cfg


# make global task registry
task_registry = TaskRegistry()
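At its core, TaskRegistry is a plain name-to-class mapping with a lookup-or-raise in make_env. That pattern can be sketched without the Isaac Gym and rsl_rl dependencies (`MiniRegistry` and all names below are illustrative, not part of the repo):

```python
class MiniRegistry:
    """Minimal stand-in for the registry pattern used by TaskRegistry."""

    def __init__(self):
        self.task_classes = {}

    def register(self, name, task_class):
        self.task_classes[name] = task_class

    def make(self, name, **kwargs):
        # same lookup-or-raise as TaskRegistry.make_env
        if name not in self.task_classes:
            raise ValueError(f"Task with name: {name} was not registered")
        return self.task_classes[name](**kwargs)

registry = MiniRegistry()
registry.register("dummy", dict)  # any callable works as a stand-in "env class"
env = registry.make("dummy", steps=3)
print(env)  # {'steps': 3}
```

Keeping the registry a module-level singleton (as `task_registry` is above) lets each env module self-register at import time.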
157
legged_gym/utils/terrain.py
Normal file
@@ -0,0 +1,157 @@
import numpy as np
from numpy.random import choice
from scipy import interpolate

from isaacgym import terrain_utils
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg


class Terrain:
    def __init__(self, cfg: LeggedRobotCfg.terrain, num_robots) -> None:

        self.cfg = cfg
        self.num_robots = num_robots
        self.type = cfg.mesh_type
        if self.type in ["none", 'plane']:
            return
        self.env_length = cfg.terrain_length
        self.env_width = cfg.terrain_width
        self.proportions = [np.sum(cfg.terrain_proportions[:i+1]) for i in range(len(cfg.terrain_proportions))]

        self.cfg.num_sub_terrains = cfg.num_rows * cfg.num_cols
        self.env_origins = np.zeros((cfg.num_rows, cfg.num_cols, 3))

        self.width_per_env_pixels = int(self.env_width / cfg.horizontal_scale)
        self.length_per_env_pixels = int(self.env_length / cfg.horizontal_scale)

        self.border = int(cfg.border_size / self.cfg.horizontal_scale)
        self.tot_cols = int(cfg.num_cols * self.width_per_env_pixels) + 2 * self.border
        self.tot_rows = int(cfg.num_rows * self.length_per_env_pixels) + 2 * self.border

        self.height_field_raw = np.zeros((self.tot_rows, self.tot_cols), dtype=np.int16)
        if cfg.curriculum:
            self.curriculum()
        elif cfg.selected:
            self.selected_terrain()
        else:
            self.randomized_terrain()

        self.heightsamples = self.height_field_raw
        if self.type == "trimesh":
            self.vertices, self.triangles = terrain_utils.convert_heightfield_to_trimesh(self.height_field_raw,
                                                                                         self.cfg.horizontal_scale,
                                                                                         self.cfg.vertical_scale,
                                                                                         self.cfg.slope_treshold)

    def randomized_terrain(self):
        for k in range(self.cfg.num_sub_terrains):
            # Env coordinates in the world
            (i, j) = np.unravel_index(k, (self.cfg.num_rows, self.cfg.num_cols))

            choice = np.random.uniform(0, 1)
            difficulty = np.random.choice([0.5, 0.75, 0.9])
            terrain = self.make_terrain(choice, difficulty)
            self.add_terrain_to_map(terrain, i, j)

    def curriculum(self):
        for j in range(self.cfg.num_cols):
            for i in range(self.cfg.num_rows):
                difficulty = i / self.cfg.num_rows
                choice = j / self.cfg.num_cols + 0.001

                terrain = self.make_terrain(choice, difficulty)
                self.add_terrain_to_map(terrain, i, j)

    def selected_terrain(self):
        terrain_type = self.cfg.terrain_kwargs.pop('type')
        for k in range(self.cfg.num_sub_terrains):
            # Env coordinates in the world
            (i, j) = np.unravel_index(k, (self.cfg.num_rows, self.cfg.num_cols))

            terrain = terrain_utils.SubTerrain("terrain",
                                               width=self.width_per_env_pixels,
                                               length=self.width_per_env_pixels,
                                               vertical_scale=self.cfg.vertical_scale,
                                               horizontal_scale=self.cfg.horizontal_scale)

            eval(terrain_type)(terrain, **self.cfg.terrain_kwargs)
            self.add_terrain_to_map(terrain, i, j)

    def make_terrain(self, choice, difficulty):
        terrain = terrain_utils.SubTerrain("terrain",
                                           width=self.width_per_env_pixels,
                                           length=self.width_per_env_pixels,
                                           vertical_scale=self.cfg.vertical_scale,
                                           horizontal_scale=self.cfg.horizontal_scale)
        slope = difficulty * 0.4
        step_height = 0.05 + 0.18 * difficulty
        discrete_obstacles_height = 0.05 + difficulty * 0.2
        stepping_stones_size = 1.5 * (1.05 - difficulty)
        stone_distance = 0.05 if difficulty == 0 else 0.1
        gap_size = 1. * difficulty
        pit_depth = 1. * difficulty
        if choice < self.proportions[0]:
            if choice < self.proportions[0] / 2:
                slope *= -1
            terrain_utils.pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
        elif choice < self.proportions[1]:
            terrain_utils.pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
            terrain_utils.random_uniform_terrain(terrain, min_height=-0.05, max_height=0.05, step=0.005, downsampled_scale=0.2)
        elif choice < self.proportions[3]:
            if choice < self.proportions[2]:
                step_height *= -1
            terrain_utils.pyramid_stairs_terrain(terrain, step_width=0.31, step_height=step_height, platform_size=3.)
        elif choice < self.proportions[4]:
            num_rectangles = 20
            rectangle_min_size = 1.
            rectangle_max_size = 2.
            terrain_utils.discrete_obstacles_terrain(terrain, discrete_obstacles_height, rectangle_min_size, rectangle_max_size, num_rectangles, platform_size=3.)
        elif choice < self.proportions[5]:
            terrain_utils.stepping_stones_terrain(terrain, stone_size=stepping_stones_size, stone_distance=stone_distance, max_height=0., platform_size=4.)
        elif choice < self.proportions[6]:
            gap_terrain(terrain, gap_size=gap_size, platform_size=3.)
        else:
            pit_terrain(terrain, depth=pit_depth, platform_size=4.)

        return terrain

    def add_terrain_to_map(self, terrain, row, col):
        i = row
        j = col
        # map coordinate system
        start_x = self.border + i * self.length_per_env_pixels
        end_x = self.border + (i + 1) * self.length_per_env_pixels
        start_y = self.border + j * self.width_per_env_pixels
        end_y = self.border + (j + 1) * self.width_per_env_pixels
        self.height_field_raw[start_x:end_x, start_y:end_y] = terrain.height_field_raw

        env_origin_x = (i + 0.5) * self.env_length
        env_origin_y = (j + 0.5) * self.env_width
        x1 = int((self.env_length / 2. - 1) / terrain.horizontal_scale)
        x2 = int((self.env_length / 2. + 1) / terrain.horizontal_scale)
        y1 = int((self.env_width / 2. - 1) / terrain.horizontal_scale)
        y2 = int((self.env_width / 2. + 1) / terrain.horizontal_scale)
        env_origin_z = np.max(terrain.height_field_raw[x1:x2, y1:y2]) * terrain.vertical_scale
        self.env_origins[i, j] = [env_origin_x, env_origin_y, env_origin_z]


def gap_terrain(terrain, gap_size, platform_size=1.):
    gap_size = int(gap_size / terrain.horizontal_scale)
    platform_size = int(platform_size / terrain.horizontal_scale)

    center_x = terrain.length // 2
    center_y = terrain.width // 2
    x1 = (terrain.length - platform_size) // 2
    x2 = x1 + gap_size
    y1 = (terrain.width - platform_size) // 2
    y2 = y1 + gap_size

    terrain.height_field_raw[center_x - x2:center_x + x2, center_y - y2:center_y + y2] = -1000
    terrain.height_field_raw[center_x - x1:center_x + x1, center_y - y1:center_y + y1] = 0


def pit_terrain(terrain, depth, platform_size=1.):
    depth = int(depth / terrain.vertical_scale)
    platform_size = int(platform_size / terrain.horizontal_scale / 2)
    x1 = terrain.length // 2 - platform_size
    x2 = terrain.length // 2 + platform_size
    y1 = terrain.width // 2 - platform_size
    y2 = terrain.width // 2 + platform_size
    terrain.height_field_raw[x1:x2, y1:y2] = -depth
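randomized_terrain and selected_terrain walk the sub-terrain grid by flattening it: np.unravel_index maps the flat index k back to a (row, col) pair in row-major order. A quick check of that mapping (the 2x3 grid shape here is illustrative):

```python
import numpy as np

num_rows, num_cols = 2, 3
# same call as in randomized_terrain, converted to plain ints for readability
cells = [tuple(int(v) for v in np.unravel_index(k, (num_rows, num_cols)))
         for k in range(num_rows * num_cols)]
print(cells)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```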
BIN
resources/robots/FHrobot/meshes/LF_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/LR_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_behind_wheel_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_elbow_roll_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_front_wheel_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_hip_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_knee_behind_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_knee_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_shoulder_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_shoulder_roll_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_wrist_1_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_wrist_2_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/L_wrist_3_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/RF_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/RR_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_behind_wheel_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_elbow_roll_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_front_wheel_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_hip_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_knee_behind_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_knee_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_shoulder_pitch_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_shoulder_roll_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_wrist_1_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_wrist_2_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/R_wrist_3_Link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/base_link.STL
Normal file
Binary file not shown.
BIN
resources/robots/FHrobot/meshes/body_Link.STL
Normal file
Binary file not shown.
29
resources/robots/FHrobot/urdf/FHrobot.csv
Normal file
@@ -0,0 +1,29 @@
Link Name,Center of Mass X,Center of Mass Y,Center of Mass Z,Center of Mass Roll,Center of Mass Pitch,Center of Mass Yaw,Mass,Moment Ixx,Moment Ixy,Moment Ixz,Moment Iyy,Moment Iyz,Moment Izz,Visual X,Visual Y,Visual Z,Visual Roll,Visual Pitch,Visual Yaw,Mesh Filename,Color Red,Color Green,Color Blue,Color Alpha,Collision X,Collision Y,Collision Z,Collision Roll,Collision Pitch,Collision Yaw,Collision Mesh Filename,Material Name,SW Components,Coordinate System,Axis Name,Joint Name,Joint Type,Joint Origin X,Joint Origin Y,Joint Origin Z,Joint Origin Roll,Joint Origin Pitch,Joint Origin Yaw,Parent,Joint Axis X,Joint Axis Y,Joint Axis Z,Limit Effort,Limit Velocity,Limit Lower,Limit Upper,Calibration rising,Calibration falling,Dynamics Damping,Dynamics Friction,Safety Soft Upper,Safety Soft Lower,Safety K Position,Safety K Velocity
base_link,-0.0287243195196827,-4.55769808200002E-18,0.111263782352098,0,0,0,9.16195316997428,0.0803675899417412,2.57637283448399E-05,0.000557107360733831,0.123149802547816,6.77009172082773E-06,0.174026115761104,0,0,0,0,0,0,package://FHrobot/meshes/base_link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/base_link.STL,,lowerbody-1/base_-2;lowerbody-1/Slewing bearing-118-M2-1;lowerbody-1/eRob110H××I-XX-18EX_V4_MC2B.BX.XX-2;lowerbody-1/eRob110H××I-XX-18EX_V4_MC2B.BX.XX-1;Trunk assembly-1/Lumbar joint fixation seat-1,base_link,,,,0,0,0,0,0,0,,0,0,0,,,,,,,,,,,,
LF_Link,-0.0443902011079814,0.0141011862798264,-0.0323586384511667,0,0,0,1.97567139051559,0.00473523786051502,-0.000235716568962365,0.000142478654805334,0.00893546862693107,-9.02555579349623E-06,0.0112165608355098,0,0,0,0,0,0,package://FHrobot/meshes/LF_Link.STL,0.501960784313725,0.501960784313725,0.501960784313725,1,0,0,0,0,0,0,package://FHrobot/meshes/LF_Link.STL,,lowerbody-1/Sim-MirrorLeg Assembly Back Right-2/Sim-MirrorEnd Plate for Rotary-1,left_roll_hip,A_left_roll_hip,LF_Joint,revolute,0.1314,0.115,0.095,0,0,0,base_link,1,0,0,1,1,-0.3,0.3,,,,,,,,
L_hip_pitch_Link,0.0597259187631695,-0.0332703104849395,-0.124428919154413,0,0,0,1.12081146225495,0.00805066152852551,1.17336246824337E-18,0.00330634654528937,0.0098260963270831,1.3519454485417E-18,0.00273912931741386,0,0,0,0,0,0,package://FHrobot/meshes/L_hip_pitch_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/L_hip_pitch_Link.STL,,lowerbody-1/wheel_Leg_LF-5/Thigh_F-1,left_front_hip,A_left_front_hip,L_hip_pitch_Joint,revolute,0,0.1558,-0.041275,0,0,0,LF_Link,0,1,0,1,1,-3,0.5,,,,,,,,
L_knee_pitch_Link,0.0545948249645601,-0.0100000000000001,-0.113818949186922,0,0,0,0.475285576963195,0.00439728615328469,1.23343528982619E-18,0.00202943245472276,0.00548213594249869,-4.74309305798384E-19,0.00113978327998144,0,0,0,0,0,0,package://FHrobot/meshes/L_knee_pitch_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/L_knee_pitch_Link.STL,,lowerbody-1/wheel_Leg_LF-5/Calf_F-1,left_front_leg,A_left_front_leg,L_knee_pitch_Joint,revolute,0.1384,-0.02327,-0.28853,0,0,0,L_hip_pitch_Link,0,1,0,2,2,0,2,,,,,,,,
L_front_wheel_Link,0,0.0259771265686115,3.33066907387547E-16,0,0,0,1.31612615806478,0.00355922247434463,2.36005296599081E-17,-1.84385131882247E-19,0.00676350779312777,9.56947720770833E-18,0.00355922247434463,0,0,0,0,0,0,package://FHrobot/meshes/L_front_wheel_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_front_wheel_Link.STL,,lowerbody-1/wheel_Leg_LF-5/Omni-wheel-1,left_front_wheel,A_left_front_wheel,L_front_wheel_Joint,continuous,0.1384,-0.006,-0.28853,0,0,0,L_knee_pitch_Link,0,1,0,,,,,,,,,,,,
RF_Link,-0.0443902011079816,-0.0141011862798257,-0.0323586384511667,0,0,0,1.97567139051559,0.00473523786051503,0.000235716568962423,0.000142478654805336,0.00893546862693106,9.02555579347833E-06,0.0112165608355098,0,0,0,0,0,0,package://FHrobot/meshes/RF_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/RF_Link.STL,,lowerbody-1/Leg Assembly Back Right-1/End Plate for Rotary-1,right_roll_hip,A_right_roll_hip,RF_Joint,revolute,0.1314,-0.115,0.095,0,0,0,base_link,1,0,0,1,1,-0.3,0.3,,,,,,,,
R_hip_pitch_Link,0.0596578324320381,-0.0363499999999999,-0.124461577699466,0,0,0,1.12081146225495,0.00804797542540444,2.92875925569726E-18,0.00330850231761244,0.0098260963270831,-1.89997453802926E-18,0.00274181542053492,0,0,0,0,0,0,package://FHrobot/meshes/R_hip_pitch_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/R_hip_pitch_Link.STL,,lowerbody-1/wheel_Leg_RF-2/Thigh_F-3,right_front_hip,A_right_front_hip,R_hip_pitch_Link,revolute,0,-0.086175,-0.041275,0,0,0,RF_Link,0,1,0,1,1,0,1,,,,,,,,
R_knee_pitch_Link,0.0545948249645604,0.00999999999999993,-0.113818949186922,0,0,0,0.475285576963195,0.0043972861532847,7.31094846806523E-19,0.00202943245472277,0.0054821359424987,3.94170177843841E-19,0.00113978327998144,0,0,0,0,0,0,package://FHrobot/meshes/R_knee_pitch_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/R_knee_pitch_Link.STL,,lowerbody-1/wheel_Leg_RF-2/Calf_F-3,right_front_leg,A_right_front_leg,R_knee_pitch_Joint,revolute,0.1384,-0.04635,-0.28853,0,0,0,R_hip_pitch_Link,0,1,0,2,2,0,0.2,,,,,,,,
R_front_wheel_Link,1.66533453693773E-16,-0.0260228734313837,1.11022302462516E-16,0,0,0,1.31612615806478,0.00355922247434463,2.79444531444272E-17,-8.3597863784879E-20,0.00676350779312777,1.58523064049009E-17,0.00355922247434462,0,0,0,0,0,0,package://FHrobot/meshes/R_front_wheel_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_front_wheel_Link.STL,,lowerbody-1/wheel_Leg_RF-2/Omni-wheel-3,right_front_wheel,A_right_front_wheel,R_front_wheel_Joint,continuous,0.1384,0.006,-0.28853,0,0,0,R_knee_pitch_Link,0,1,0,,,,,,,,,,,,
LR_Link,-0.0481926144051435,0.0229637362457248,-0.132408119862618,0,0,0,1.17131532161649,0.0137577031338507,-2.61773303070726E-05,-0.00450691427712793,0.0162677645962514,-7.19216239202983E-05,0.00301544057128774,0,0,0,0,0,0,package://FHrobot/meshes/LR_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/LR_Link.STL,,lowerbody-1/wheel_Leg_LH-1/thigh2-4,left_behind_hip,A_left_behind_hip,LR_Joint,revolute,-0.145,0.17761,0.07,0,0,0,base_link,0,1,0,1,1,0,1,,,,,,,,
L_knee_behind_Link,-5.55111512312578E-17,0.00818265937033372,-0.248551179372465,0,0,0,1.57098256619605,0.013018236619212,-1.54600762627302E-18,-3.04760889843994E-18,0.0140297977644304,0.000220625667770448,0.0028266272313598,0,0,0,0,0,0,package://FHrobot/meshes/L_knee_behind_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/L_knee_behind_Link.STL,,lowerbody-1/wheel_Leg_LH-1/calf2-3,left_behind_leg,A_left_behind_leg,L_knee_behind_Joint,prismatic,-0.11845,0.0225,-0.32543,0,0.34907,0,LR_Link,0,0,1,1,1,0,0.3,,,,,,,,
L_behind_wheel_Link,5.55111512312578E-17,0.00510352638801745,0,0,0,0,0.464948384073087,0.00168993604286768,8.38861524157885E-18,-2.17711290245964E-19,0.00324721318025415,6.03650519256843E-18,0.00168993604286768,0,0,0,0,0,0,package://FHrobot/meshes/L_behind_wheel_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_behind_wheel_Link.STL,,lowerbody-1/wheel_Leg_LH-1/Wheel-1,left_behind_wheel,A_left_behind_wheel,L_behind_wheel_Joint,continuous,0,0.031,-0.28669,0,-0.34907,0,L_knee_behind_Link,0,1,0,,,,,,,,,,,,
RR_Link,-0.0481926144051438,-0.0229637362457235,-0.132408119862618,0,0,0,1.17131532161649,0.0137577031338507,2.61773303071367E-05,-0.00450691427712793,0.0162677645962514,7.19216239204504E-05,0.00301544057128775,0,0,0,0,0,0,package://FHrobot/meshes/RR_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/RR_Link.STL,,lowerbody-1/wheel_Leg_RH-1/thigh2-3,right_behind_hip,A_right_behind_hip,RR_Joint,revolute,-0.145,-0.178,0.07,0,0,0,base_link,0,1,0,2,2,0,2,,,,,,,,
R_knee_behind_Link,3.88578058618805E-16,-0.00818265937033386,-0.248551179372464,0,0,0,1.57098256619605,0.013018236619212,-2.86553684822791E-20,7.53403511319454E-18,0.0140297977644304,-0.000220625667770451,0.0028266272313598,0,0,0,0,0,0,package://FHrobot/meshes/R_knee_behind_Link.STL,0.792156862745098,0.819607843137255,0.933333333333333,1,0,0,0,0,0,0,package://FHrobot/meshes/R_knee_behind_Link.STL,,lowerbody-1/wheel_Leg_RH-1/calf2-3,right_behind_leg,A_right_behind_leg,R_knee_behind_Joint,prismatic,-0.11845,-0.0225,-0.32543,0,0.34907,0,RR_Link,0,0,1,1,1,0,0.3,,,,,,,,
R_behind_wheel_Link,-1.11022302462516E-16,-0.00510352638801681,0,0,0,0,0.464948384073087,0.00168993604286761,1.12016259039502E-17,7.4580258389042E-20,0.00324721318025415,4.99357578518259E-18,0.00168993604286761,0,0,0,0,0,0,package://FHrobot/meshes/R_behind_wheel_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_behind_wheel_Link.STL,,lowerbody-1/wheel_Leg_RH-1/Wheel-1,right_behind_wheel,A_right_behind_wheel,R_behind_wheel_Joint,continuous,0,-0.031,-0.28669,0,-0.34907,0,R_knee_behind_Link,0,1,0,,,,,,,,,,,,
body_Link,-0.00882910108062327,-0.000118133344383122,0.245151401628381,0,0,0,10.0801547715381,0.0773569211714163,-3.85496414837815E-05,1.2709668085755E-06,0.0716250463352797,-1.17181446359426E-06,0.0345399387103509,0,0,0,0,0,0,package://FHrobot/meshes/body_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/body_Link.STL,,Trunk assembly-1/Trunk frame-1/Trunk components2-2;Trunk assembly-1/Trunk frame-1/Trunk components1-2;Trunk assembly-1/orin-1;Trunk assembly-1/Reducer-1;Trunk assembly-1/Trunk frame-1/Trunk components1-1;Trunk assembly-1/Trunk frame-1/Trunk components2-1;Trunk assembly-1/Trunk frame-1/Trunk components3-1;Trunk assembly-1/Trunk frame-1/Corner brace-1;Trunk assembly-1/Trunk frame-1/Corner brace-2;Trunk assembly-1/Trunk frame-1/Arm installation part-1;Trunk assembly-1/Trunk frame-1/Arm installation part-2;Trunk assembly-1/Trunk frame-1;Head-1;Trunk assembly-1/Arm_L-1/joint1-1;Trunk assembly-1/Arm_L-2/joint1-1,waist,A_waist_pitch,body_Joint,revolute,0,0,0.322,0,0,0,base_link,0,1,0,1,1,-1,1,,,,,,,,
L_shoulder_pitch_Link,-0.0207752848944858,-0.0254611115358671,1.62883253995716E-07,0,0,0,0.593541990378474,0.00117600830778555,0.000219273165281773,-5.14079286085665E-09,0.000566971203515922,6.16182902431345E-09,0.0012661146425649,0,0,0,0,0,0,package://FHrobot/meshes/L_shoulder_pitch_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_shoulder_pitch_Link.STL,,Trunk assembly-1/Arm_L-2/joint2-1,shoulder_left,A_shoulder_pitch,L_shoulder_pitch_Joint,revolute,-0.0034455,0.26248,0.401,0,0,0,body_Link,0,1,0,1,1,-3,3,,,,,,,,
L_shoulder_roll_Link,-0.0062021439547804,-8.09177141811168E-05,-0.152256769845322,0,0,0,0.864175809181228,0.00195283329320439,-1.30750259456096E-06,-0.000421592700139823,0.00205270642376877,-5.5188950492335E-06,0.000632536575702939,0,0,0,0,0,0,package://FHrobot/meshes/L_shoulder_roll_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_shoulder_roll_Link.STL,,Trunk assembly-1/Arm_L-2/joint-3-2,shoulder_left,A_shoulder_left_roll,L_shoulder_roll_Joint,revolute,0,0,0,0,0,0,L_shoulder_pitch_Link,1,0,0,1,1,-0.2,3,,,,,,,,
L_elbow_roll_Link,0.0102680959016839,0.000139292331516383,-0.0595928244477799,0,0,0,0.289644182069639,0.000614180896768601,-3.37389009655357E-07,-0.000144595301092032,0.000637359802224361,-1.95550596521708E-06,0.000156488481697971,0,0,0,0,0,0,package://FHrobot/meshes/L_elbow_roll_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_elbow_roll_Link.STL,,Trunk assembly-1/Arm_L-2/joint-4-1,left_elbow,A_elbow_left,L_elbow_roll_Joint,revolute,0,0,-0.256,0,0,0,L_shoulder_roll_Link,1,0,0,1,1,0,2,,,,,,,,
L_wrist_1_Link,-0.0180422061601805,-0.000236638821166035,0.0215381575964104,0,0,0,0.239430986559928,0.000262739581455783,-3.06827371041571E-07,-4.42369625793131E-05,0.000285945708820184,-5.821034275449E-07,0.000119899041082443,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_1_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_1_Link.STL,,Trunk assembly-1/Arm_L-2/joint-5-1,wrist_left,A_wrist_left_1_3,L_wrist_1_Joint,revolute,-0.00029999,0,-0.21,0,0,0,L_elbow_roll_Link,0,0,1,2,2,-3,3,,,,,,,,
L_wrist_2_Link,0.0070674208500859,9.70119109627299E-05,-0.0593817425638522,0,0,0,0.21879531956553,0.000334474408036445,-2.27816759278335E-07,-7.82334927589335E-05,0.000350527926104103,-1.05800782775659E-06,0.000104925380792473,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_2_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_2_Link.STL,,Trunk assembly-1/Arm_L-2/joint-6-2,wrist_left,A_wrist_left_2,L_wrist_2_Joint,revolute,0,0,0,0,0,0,L_wrist_1_Link,1,0,0,2,2,-2,2,,,,,,,,
L_wrist_3_Link,0.000378959091884292,0.000451047606387089,-0.131295298080445,0,0,0,0.0650358728752379,2.01382715519073E-05,1.42877380732321E-06,2.68278666778984E-08,1.91136119310351E-05,6.18142307145682E-09,3.18996925604466E-05,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_3_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/L_wrist_3_Link.STL,,Trunk assembly-1/Arm_L-2/wrist-2,wrist_left,A_wrist_left_1_3,L_wrist_3_Joint,revolute,0,0,0,0,0,0,L_wrist_2_Link,0,0,1,2,2,-3,3,,,,,,,,
R_shoulder_pitch_Link,0.0207673360733451,0.0254609943043803,0.000580250673959615,0,0,0,0.593542307736012,0.00117607485792848,0.000219194688215038,-2.3910699375265E-06,0.000566969566649508,5.76726688971481E-06,0.00126605196342283,0,0,0,0,0,0,package://FHrobot/meshes/R_shoulder_pitch_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_shoulder_pitch_Link.STL,,Trunk assembly-1/Arm_L-1/joint2-1,shoulder_right,A_shoulder_pitch,R_shoulder_pitch_Joint,revolute,0.0034455,-0.26248,0.401,0,0,0,body_Link,0,1,0,1,1,-3,3,,,,,,,,
R_shoulder_roll_Link,0.0103836315369401,0.000135647182259224,-0.152028549720577,0,0,0,0.864177346859373,0.00192867771848554,-1.62474871609433E-06,0.000457222630756096,0.00205270334302069,5.98928658461075E-06,0.000656700561286043,0,0,0,0,0,0,package://FHrobot/meshes/R_shoulder_roll_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_shoulder_roll_Link.STL,,Trunk assembly-1/Arm_L-1/joint-3-2,shoulder_right,A_shoulder_right_roll,R_shoulder_roll_Joint,revolute,0,0,0,0,0,0,R_shoulder_pitch_Link,1,0,0,1,1,-3,0.2,,,,,,,,
R_elbow_roll_Link,-0.00862667285124911,-0.000117804125208454,-0.0598525042258893,0,0,0,0.289644350754759,0.000605892028531419,-4.48122036072133E-07,0.000156949232601231,0.000637358757546312,2.11758875883979E-06,0.000164779464477182,0,0,0,0,0,0,package://FHrobot/meshes/R_elbow_roll_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_elbow_roll_Link.STL,,Trunk assembly-1/Arm_L-1/joint-4-1,right_elbow,A_elbow_right,R_elbow_roll_Joint,revolute,0.0070347,9.2351E-05,-0.2559,0,0,0,R_shoulder_roll_Link,1,0,0,1,1,0,2,,,,,,,,
R_wrist_1_Link,0.0174435421265954,0.000228783022665435,0.0220258933788299,0,0,0,0.239430945334683,0.000260201423857616,-3.40174050324356E-07,4.80936773746225E-05,0.000285945235834406,6.32665015737911E-07,0.000122437626511512,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_1_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_1_Link.STL,,Trunk assembly-1/Arm_L-1/joint-5-1,wrist_right,A_wrist_right_1_3,R_wrist_1_Joint,revolute,0.0060705,7.9677E-05,-0.20991,0,0,0,R_elbow_roll_Link,0,0,1,2,2,-3,3,,,,,,,,
R_wrist_2_Link,-0.00543298177199459,-7.55804740497079E-05,-0.0595535860214005,0,0,0,0.218795021719975,0.000330002867023569,-2.8748285229527E-07,8.44205877249315E-05,0.000350526882459532,1.1389568428902E-06,0.000109397365712253,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_2_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_2_Link.STL,,Trunk assembly-1/Arm_L-1/joint-6-2,wrist_right,A_wrist_right_2,R_wrist_2_Joint,revolute,0,0,0,0,0,0,R_wrist_1_Link,1,0,0,2,2,-2,2,,,,,,,,
R_wrist_3_Link,0.0032290763256793,-0.000403679491448261,-0.131256285745038,0,0,0,0.0650358973951426,2.01486132163115E-05,1.4285368534047E-06,-3.49343407136786E-07,1.91136072171782E-05,2.84757151416979E-08,3.18893633518149E-05,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_3_Link.STL,1,1,1,1,0,0,0,0,0,0,package://FHrobot/meshes/R_wrist_3_Link.STL,,Trunk assembly-1/Arm_L-1/wrist-2,wrist_right,A_wrist_right_1_3,R_wrist_3_Joint,revolute,0,0,0,0,0,0,R_wrist_2_Link,0,0,1,2,2,-3,3,,,,,,,,
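The rows above are one record per link from what appears to be a CAD-to-URDF export table. As a minimal sketch of how such a row could be consumed, the snippet below parses a single (abridged) row with the standard `csv` module. The column layout is an assumption inferred from the row shape — column 0 the link name, columns 1–3 the centre of mass, column 7 the mass in kg — not a documented schema:

```python
import csv
import io

# Abridged copy of the first export row above (real values from the file).
# ASSUMED layout: col 0 = link name, cols 1-3 = centre of mass, col 7 = mass [kg].
ROW = ("L_wrist_1_Link,-0.0180422061601805,-0.000236638821166035,"
       "0.0215381575964104,0,0,0,0.239430986559928")

def parse_link_row(line):
    """Pull the link name, centre of mass, and mass out of one export row."""
    fields = next(csv.reader(io.StringIO(line)))
    return {
        "name": fields[0],
        "com": tuple(float(v) for v in fields[1:4]),
        "mass": float(fields[7]),
    }

link = parse_link_row(ROW)
print(link["name"], link["mass"])
```

In practice the full file would be read with `csv.reader(open(path))` and one call per row; the dict keys here are illustrative only.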
1594
resources/robots/FHrobot/urdf/FHrobot.urdf
Normal file
File diff suppressed because it is too large
350
resources/robots/FHrobot/urdf/FHrobot.xml
Normal file
@@ -0,0 +1,350 @@
<mujoco model="FHrobot">
  <default>
    <default class="robot">
      <default class="motor">
        <joint />
        <motor />
      </default>
      <default class="visual">
        <geom material="visualgeom" contype="0" conaffinity="0" group="2" />
      </default>
      <default class="collision">
        <geom material="collision_material" condim="3" contype="0" conaffinity="1" group="1" solref="0.005 1" friction="1 0.01 0.01" />
        <equality solimp="0.99 0.999 1e-05" solref="0.005 1" />
      </default>
    </default>
  </default>

  <compiler angle="radian" />
  <option timestep="0.001" gravity="0 0 -9.81" wind="0 0 0" integrator="RK4"
          density="1.625" viscosity="1.8e-5" />
  <visual>
    <global realtime="1" />
    <quality shadowsize="2048" numslices="28" offsamples="4" />
    <headlight diffuse="1 1 1" specular="0.5 0.5 0.5" active="1" />
    <rgba fog="1 0 0 1" haze="1 1 1 1" />
  </visual>
  <asset>
    <texture name="plane" type="2d" builtin="checker" rgb1=".1 .1 .1" rgb2=".9 .9 .9"
             width="512" height="512" mark="cross" markrgb=".8 .8 .8" />

    <!-- Ramp texture -->
    <texture name="ramp_texture" type="2d" builtin="checker" rgb1="0.4 0.4 0.4"
             rgb2="0.6 0.6 0.6" width="512" height="512" mark="edge" markrgb="0.8 0.8 0.8" />
    <material name="ramp_material" texture="ramp_texture" texrepeat="2 2" reflectance="0.2" />

    <material name="plane" reflectance="0.3" texture="plane" texrepeat="1 1" texuniform="true" />
    <material name="metal" rgba="1 1 1 1" />
    <material name="rubber" rgba="0.5 0.5 0.5 1" />
    <material name="default_material" rgba="0.7 0.7 0.7 1" />
    <material name="collision_material" rgba="1.0 0.28 0.1 0.9" />
    <mesh name="base_link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/base_link.STL" />
    <mesh name="LF_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/LF_Link.STL" />
    <mesh name="L_hip_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_hip_pitch_Link.STL" />
    <mesh name="L_knee_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_knee_pitch_Link.STL" />
    <mesh name="L_front_wheel_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_front_wheel_Link.STL" />
    <mesh name="RF_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/RF_Link.STL" />
    <mesh name="R_hip_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_hip_pitch_Link.STL" />
    <mesh name="R_knee_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_knee_pitch_Link.STL" />
    <mesh name="R_front_wheel_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_front_wheel_Link.STL" />
    <mesh name="LR_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/LR_Link.STL" />
    <mesh name="L_knee_behind_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_knee_behind_Link.STL" />
    <mesh name="L_behind_wheel_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_behind_wheel_Link.STL" />
    <mesh name="RR_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/RR_Link.STL" />
    <mesh name="R_knee_behind_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_knee_behind_Link.STL" />
    <mesh name="R_behind_wheel_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_behind_wheel_Link.STL" />
    <mesh name="body_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/body_Link.STL" />
    <mesh name="L_shoulder_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_shoulder_pitch_Link.STL" />
    <mesh name="L_shoulder_roll_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_shoulder_roll_Link.STL" />
    <mesh name="L_elbow_roll_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_elbow_roll_Link.STL" />
    <mesh name="L_wrist_1_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_wrist_1_Link.STL" />
    <mesh name="L_wrist_2_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_wrist_2_Link.STL" />
    <mesh name="L_wrist_3_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/L_wrist_3_Link.STL" />
    <mesh name="R_shoulder_pitch_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_shoulder_pitch_Link.STL" />
    <mesh name="R_shoulder_roll_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_shoulder_roll_Link.STL" />
    <mesh name="R_elbow_roll_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_elbow_roll_Link.STL" />
    <mesh name="R_wrist_1_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_wrist_1_Link.STL" />
    <mesh name="R_wrist_2_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_wrist_2_Link.STL" />
    <mesh name="R_wrist_3_Link.STL" file="/home/mrji/mujoco_learning/FHrobot/meshes/R_wrist_3_Link.STL" />
  </asset>
  <worldbody>
    <geom name="floor" pos="0 0 0" size="0 0 .25" type="plane" material="plane"
          condim="3" friction="1.5 0.3 0.05"
          solimp="0.9 0.95 0.001" solref="0.02 1" />
    <light directional="true" ambient=".3 .3 .3" pos="30 30 30" dir="0 -2 -1"
           diffuse=".5 .5 .5" specular=".5 .5 .5" />

    <!-- Terrain: two 10-degree ramps with a raised flat section between them -->
    <body name="ramp1" pos="-3 0 -0.06">
      <!-- Ramp geom, 10-degree incline -->
      <geom name="ramp1" type="box" size="2 10 0.1" pos="0 0 0"
            euler="0 0.174533 0" material="ramp_material"
            condim="3" friction="1.2 0.005 0.0001"
            solimp="0.9 0.95 0.001" solref="0.02 1" />
    </body>
    <body name="ramp2" pos="-7 0 0.1">
      <!-- Flat geom, no incline -->
      <geom name="ramp2" type="box" size="2 10 0.1" pos="0 0 0.2"
            euler="0 0 0" material="ramp_material"
            condim="3" friction="1.2 0.005 0.0001"
            solimp="0.9 0.95 0.001" solref="0.02 1" />
    </body>
    <body name="ramp3" pos="-11 0 -0.06">
      <!-- Ramp geom, 10-degree decline -->
      <geom name="ramp3" type="box" size="2 10 0.1" pos="0 0 0"
            euler="0 -0.174533 0" material="ramp_material"
            condim="3" friction="1.2 0.005 0.0001"
            solimp="0.9 0.95 0.001" solref="0.02 1" />
    </body>
    <body name="base_link" pos="20 0 0.7" quat="1 0 0 0" childclass="robot">
      <freejoint name="floating_base" />
      <geom name="base_link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="base_link.STL" class="collision" />
      <geom name="base_link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="base_link.STL" class="visual" />
      <body name="LF_Link" pos="0.1314 0.115 0.095" quat="1.0 0.0 0.0 0.0">
        <joint name="LF_Joint" type="hinge" ref="-0.05" class="motor" range="-0.3 0.3" axis="1 0 0" />
        <inertial pos="-0.0443902011079814 0.0141011862798264 -0.0323586384511667" quat="1.0 0.0 0.0 0.0" mass="1.97567139051559" diaginertia="0.004735 0.011217 0.008935" />
        <geom name="LF_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="LF_Link.STL" class="collision" />
        <geom name="LF_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="LF_Link.STL" class="visual" />
        <body name="L_hip_pitch_Link" pos="0 0.1558 -0.041275" quat="1.0 0.0 0.0 0.0">
          <joint name="L_hip_pitch_Joint" type="hinge" ref="0.0" class="motor" range="-3 0.5" axis="0 1 0" />
          <inertial pos="0.0597259187631695 -0.0332703104849395 -0.124428919154413" quat="1.0 0.0 0.0 0.0" mass="1.12081146225495" diaginertia="0.008051 0.002739 0.009826" />
          <geom name="L_hip_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_hip_pitch_Link.STL" class="collision" />
          <geom name="L_hip_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_hip_pitch_Link.STL" class="visual" />
          <body name="L_knee_pitch_Link" pos="0.1384 -0.02327 -0.28853" quat="1.0 0.0 0.0 0.0">
            <joint name="L_knee_pitch_Joint" type="hinge" ref="0.0" class="motor" range="0 2" axis="0 1 0" />
            <inertial pos="0.0545948249645601 -0.0100000000000001 -0.113818949186922" quat="1.0 0.0 0.0 0.0" mass="0.475285576963195" diaginertia="0.004397 0.00114 0.005482" />
            <geom name="L_knee_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_knee_pitch_Link.STL" class="collision" />
            <geom name="L_knee_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_knee_pitch_Link.STL" class="visual" />
            <body name="L_front_wheel_Link" pos="0.1384 -0.006 -0.28853" quat="1.0 0.0 0.0 0.0">
              <joint name="L_front_wheel_Joint" type="hinge" ref="0.0" class="motor" axis="0 1 0" />
              <inertial pos="0 0.0259771265686115 3.33066907387547E-16" quat="1.0 0.0 0.0 0.0" mass="1.31612615806478" diaginertia="0.003559 0.003559 0.006764" />
              <geom name="L_front_wheel_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_front_wheel_Link.STL" class="collision" />
              <geom name="L_front_wheel_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="rubber" type="mesh" mesh="L_front_wheel_Link.STL" class="visual" />
            </body>
          </body>
        </body>
      </body>
      <body name="RF_Link" pos="0.1314 -0.115 0.095" quat="1.0 0.0 0.0 0.0">
        <joint name="RF_Joint" type="hinge" ref="0.05" class="motor" range="-0.3 0.3" axis="1 0 0" />
        <inertial pos="-0.0443902011079816 -0.0141011862798257 -0.0323586384511667" quat="1.0 0.0 0.0 0.0" mass="1.97567139051559" diaginertia="0.004735 0.011217 0.008935" />
        <geom name="RF_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="RF_Link.STL" class="collision" />
        <geom name="RF_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="RF_Link.STL" class="visual" />
        <body name="R_hip_pitch_Link" pos="0 -0.086175 -0.041275" quat="1.0 0.0 0.0 0.0">
          <joint name="R_hip_pitch_Joint" type="hinge" ref="0.0" class="motor" range="0 1" axis="0 1 0" />
          <inertial pos="0.0596578324320381 -0.0363499999999999 -0.124461577699466" quat="1.0 0.0 0.0 0.0" mass="1.12081146225495" diaginertia="0.008048 0.002742 0.009826" />
          <geom name="R_hip_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_hip_pitch_Link.STL" class="collision" />
          <geom name="R_hip_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_hip_pitch_Link.STL" class="visual" />
          <body name="R_knee_pitch_Link" pos="0.1384 -0.04635 -0.28853" quat="1.0 0.0 0.0 0.0">
            <joint name="R_knee_pitch_Joint" type="hinge" ref="0.0" class="motor" range="0 0.2" axis="0 1 0" />
            <inertial pos="0.0545948249645604 0.00999999999999993 -0.113818949186922" quat="1.0 0.0 0.0 0.0" mass="0.475285576963195" diaginertia="0.004397 0.00114 0.005482" />
            <geom name="R_knee_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_knee_pitch_Link.STL" class="collision" />
            <geom name="R_knee_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_knee_pitch_Link.STL" class="visual" />
            <body name="R_front_wheel_Link" pos="0.1384 0.006 -0.28853" quat="1.0 0.0 0.0 0.0">
              <joint name="R_front_wheel_Joint" type="hinge" ref="0.0" class="motor" axis="0 1 0" />
              <inertial pos="1.66533453693773E-16 -0.0260228734313837 1.11022302462516E-16" quat="1.0 0.0 0.0 0.0" mass="1.31612615806478" diaginertia="0.003559 0.003559 0.006764" />
              <geom name="R_front_wheel_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_front_wheel_Link.STL" class="collision" />
              <geom name="R_front_wheel_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="rubber" type="mesh" mesh="R_front_wheel_Link.STL" class="visual" />
            </body>
          </body>
        </body>
      </body>
      <body name="LR_Link" pos="-0.145 0.17761 0.07" quat="1.0 0.0 0.0 0.0">
        <joint name="LR_Joint" type="hinge" ref="0.0" class="motor" range="0 1" axis="0 1 0" />
        <inertial pos="-0.0481926144051435 0.0229637362457248 -0.132408119862618" quat="1.0 0.0 0.0 0.0" mass="1.17131532161649" diaginertia="0.013758 0.003015 0.016268" />
        <geom name="LR_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="LR_Link.STL" class="collision" />
        <geom name="LR_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="LR_Link.STL" class="visual" />
        <body name="L_knee_behind_Link" pos="-0.11845 0.0225 -0.32543" quat="0.9848073927247508 0.0 0.17365022094624097 0.0">
          <joint name="L_knee_behind_Joint" type="slide" ref="0.0" class="motor" range="0 0.3" axis="0 0 1" />
          <inertial pos="-5.55111512312578E-17 0.00818265937033372 -0.248551179372465" quat="1.0 0.0 0.0 0.0" mass="1.57098256619605" diaginertia="0.011826 0.004019 0.01403" />
          <geom name="L_knee_behind_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_knee_behind_Link.STL" class="collision" />
          <geom name="L_knee_behind_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_knee_behind_Link.STL" class="visual" />
          <body name="L_behind_wheel_Link" pos="0 0.031 -0.28669" quat="0.9848073927247508 0.0 -0.17365022094624097 0.0">
            <joint name="L_behind_wheel_Joint" type="hinge" ref="0.0" class="motor" axis="0 1 0" />
            <inertial pos="5.55111512312578E-17 0.00510352638801745 0" quat="1.0 0.0 0.0 0.0" mass="0.464948384073087" diaginertia="0.00169 0.00169 0.003247" />
            <geom name="L_behind_wheel_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_behind_wheel_Link.STL" class="collision" />
            <geom name="L_behind_wheel_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="rubber" type="mesh" mesh="L_behind_wheel_Link.STL" class="visual" />
          </body>
        </body>
      </body>
      <body name="RR_Link" pos="-0.145 -0.178 0.07" quat="1.0 0.0 0.0 0.0">
        <joint name="RR_Joint" type="hinge" ref="0.0" class="motor" range="0 2" axis="0 1 0" />
        <inertial pos="-0.0481926144051438 -0.0229637362457235 -0.132408119862618" quat="1.0 0.0 0.0 0.0" mass="1.17131532161649" diaginertia="0.013758 0.003015 0.016268" />
        <geom name="RR_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="RR_Link.STL" class="collision" />
        <geom name="RR_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="RR_Link.STL" class="visual" />
        <body name="R_knee_behind_Link" pos="-0.11845 -0.0225 -0.32543" quat="0.9848073927247508 0.0 0.17365022094624097 0.0">
          <joint name="R_knee_behind_Joint" type="slide" ref="0.0" class="motor" range="0 0.3" axis="0 0 1" />
          <inertial pos="3.88578058618805E-16 -0.00818265937033386 -0.248551179372464" quat="1.0 0.0 0.0 0.0" mass="1.57098256619605" diaginertia="0.011826 0.004019 0.01403" />
          <geom name="R_knee_behind_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_knee_behind_Link.STL" class="collision" />
          <geom name="R_knee_behind_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_knee_behind_Link.STL" class="visual" />
          <body name="R_behind_wheel_Link" pos="0 -0.031 -0.28669" quat="0.9848073927247508 0.0 -0.17365022094624097 0.0">
            <joint name="R_behind_wheel_Joint" type="hinge" ref="0.0" class="motor" axis="0 1 0" />
            <inertial pos="-1.11022302462516E-16 -0.00510352638801681 0" quat="1.0 0.0 0.0 0.0" mass="0.464948384073087" diaginertia="0.00169 0.00169 0.003247" />
            <geom name="R_behind_wheel_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_behind_wheel_Link.STL" class="collision" />
            <geom name="R_behind_wheel_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="rubber" type="mesh" mesh="R_behind_wheel_Link.STL" class="visual" />
          </body>
        </body>
      </body>
      <body name="body_Link" pos="0 0 0.322" quat="1.0 0.0 0.0 0.0">
        <joint name="body_Joint" type="hinge" ref="-0.0" class="motor" range="-1 1" axis="0 1 0" />
        <inertial pos="-0.00882910108062327 -0.000118133344383122 0.245151401628381" quat="1.0 0.0 0.0 0.0" mass="10.0801547715381" diaginertia="0.350938 0.092014 0.324866" />
        <geom name="body_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="body_Link.STL" class="collision" />
        <geom name="body_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="body_Link.STL" class="visual" />
        <body name="L_shoulder_pitch_Link" pos="-0.0034455 0.26248 0.401" quat="1.0 0.0 0.0 0.0">
          <joint name="L_shoulder_pitch_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 1 0" damping="0.1" />
          <inertial pos="-0.0207752848944858 -0.0254611115358671 1.62883253995716E-07" quat="1.0 0.0 0.0 0.0" mass="0.593541990378474" diaginertia="0.001176 0.001266 0.000567" />
          <geom name="L_shoulder_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_shoulder_pitch_Link.STL" class="collision" />
          <geom name="L_shoulder_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_shoulder_pitch_Link.STL" class="visual" />
          <body name="L_shoulder_roll_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
            <joint name="L_shoulder_roll_Joint" type="hinge" ref="0.0" class="motor" range="-0.2 3" axis="1 0 0" damping="0.1" />
            <inertial pos="-0.0062021439547804 -8.09177141811168E-05 -0.152256769845322" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
            <geom name="L_shoulder_roll_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_shoulder_roll_Link.STL" class="collision" />
            <geom name="L_shoulder_roll_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_shoulder_roll_Link.STL" class="visual" />
            <body name="L_elbow_roll_Link" pos="0 0 -0.256" quat="1.0 0.0 0.0 0.0">
              <joint name="L_elbow_roll_Joint" type="hinge" ref="0.0" class="motor" range="0 2" axis="1 0 0" damping="0.1" />
              <inertial pos="0.0102680959016839 0.000139292331516383 -0.0595928244477799" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
              <geom name="L_elbow_roll_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_elbow_roll_Link.STL" class="collision" />
              <geom name="L_elbow_roll_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_elbow_roll_Link.STL" class="visual" />
              <body name="L_wrist_1_Link" pos="-0.00029999 0 -0.21" quat="1.0 0.0 0.0 0.0">
                <joint name="L_wrist_1_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 0 1" damping="0.1" />
                <inertial pos="-0.0180422061601805 -0.000236638821166035 0.0215381575964104" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
                <geom name="L_wrist_1_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_wrist_1_Link.STL" class="collision" />
                <geom name="L_wrist_1_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_wrist_1_Link.STL" class="visual" />
                <body name="L_wrist_2_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
                  <joint name="L_wrist_2_Joint" type="hinge" ref="0.0" class="motor" range="-2 2" axis="1 0 0" damping="0.1" />
                  <inertial pos="0.0070674208500859 9.70119109627299E-05 -0.0593817425638522" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
                  <geom name="L_wrist_2_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_wrist_2_Link.STL" class="collision" />
                  <geom name="L_wrist_2_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_wrist_2_Link.STL" class="visual" />
                  <body name="L_wrist_3_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
                    <joint name="L_wrist_3_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 0 1" damping="0.1" />
                    <inertial pos="0.000378959091884292 0.000451047606387089 -0.131295298080445" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.04907 0.001521 0.048948" />
                    <geom name="L_wrist_3_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="L_wrist_3_Link.STL" class="collision" />
                    <geom name="L_wrist_3_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="L_wrist_3_Link.STL" class="visual" />
                  </body>
                </body>
              </body>
            </body>
          </body>
        </body>
        <body name="R_shoulder_pitch_Link" pos="0.0034455 -0.26248 0.401" quat="1.0 0.0 0.0 0.0">
          <joint name="R_shoulder_pitch_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 1 0" damping="0.1" />
          <inertial pos="0.0207673360733451 0.0254609943043803 0.000580250673959615" quat="1.0 0.0 0.0 0.0" mass="0.593542307736012" diaginertia="0.001176 0.001266 0.000567" />
          <geom name="R_shoulder_pitch_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_shoulder_pitch_Link.STL" class="collision" />
          <geom name="R_shoulder_pitch_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_shoulder_pitch_Link.STL" class="visual" />
          <body name="R_shoulder_roll_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
            <joint name="R_shoulder_roll_Joint" type="hinge" ref="0.0" class="motor" range="-3 0.2" axis="1 0 0" damping="0.1" />
            <inertial pos="0.0103836315369401 0.000135647182259224 -0.152028549720577" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
            <geom name="R_shoulder_roll_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_shoulder_roll_Link.STL" class="collision" />
            <geom name="R_shoulder_roll_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_shoulder_roll_Link.STL" class="visual" />
            <body name="R_elbow_roll_Link" pos="0.0070347 9.2351E-05 -0.2559" quat="1.0 0.0 0.0 0.0">
              <joint name="R_elbow_roll_Joint" type="hinge" ref="0.0" class="motor" range="0 2" axis="1 0 0" damping="0.1" />
              <inertial pos="-0.00862667285124911 -0.000117804125208454 -0.0598525042258893" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
              <geom name="R_elbow_roll_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_elbow_roll_Link.STL" class="collision" />
              <geom name="R_elbow_roll_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_elbow_roll_Link.STL" class="visual" />
              <body name="R_wrist_1_Link" pos="0.0060705 7.9677E-05 -0.20991" quat="1.0 0.0 0.0 0.0">
                <joint name="R_wrist_1_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 0 1" damping="0.1" />
                <inertial pos="0.0174435421265954 0.000228783022665435 0.0220258933788299" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
                <geom name="R_wrist_1_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_wrist_1_Link.STL" class="collision" />
                <geom name="R_wrist_1_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_wrist_1_Link.STL" class="visual" />
                <body name="R_wrist_2_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
                  <joint name="R_wrist_2_Joint" type="hinge" ref="0.0" class="motor" range="-2 2" axis="1 0 0" damping="0.1" />
                  <inertial pos="-0.00543298177199459 -7.55804740497079E-05 -0.0595535860214005" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.048764 0.001445 0.049329" />
                  <geom name="R_wrist_2_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_wrist_2_Link.STL" class="collision" />
                  <geom name="R_wrist_2_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_wrist_2_Link.STL" class="visual" />
                  <body name="R_wrist_3_Link" pos="0 0 0" quat="1.0 0.0 0.0 0.0">
                    <joint name="R_wrist_3_Joint" type="hinge" ref="0.0" class="motor" range="-3 3" axis="0 0 1" damping="0.1" />
                    <inertial pos="0.0032290763256793 -0.000403679491448261 -0.131256285745038" quat="1.0 0.0 0.0 0.0" mass="1.677087" diaginertia="0.04907 0.001521 0.048948" />
                    <geom name="R_wrist_3_Link_collision" pos="0 0 0" quat="1.0 0.0 0.0 0.0" type="mesh" mesh="R_wrist_3_Link.STL" class="collision" />
                    <geom name="R_wrist_3_Link_visual" pos="0 0 0" quat="1.0 0.0 0.0 0.0" material="metal" type="mesh" mesh="R_wrist_3_Link.STL" class="visual" />
                  </body>
                </body>
              </body>
            </body>
          </body>
        </body>
      </body>
      <site name="base_link_site" pos="0 0 0" quat="1 0 0 0" />
      <camera name="front_camera" mode="track" fovy="90.0" quat="4.329780281177467e-17 4.329780281177466e-17 0.7071067811865475 0.7071067811865476" pos="0.0 2.0 0.5" />
      <camera name="side_camera" mode="track" fovy="90.0" quat="-0.5 -0.4999999999999999 0.5 0.5000000000000001" pos="-2.0 0.0 0.5" />
    </body>
  </worldbody>
  <actuator>
    <motor name="L_front_wheel_Joint_ctrl" joint="L_front_wheel_Joint" class="motor" />
    <motor name="R_front_wheel_Joint_ctrl" joint="R_front_wheel_Joint" class="motor" />
    <motor name="L_behind_wheel_Joint_ctrl" joint="L_behind_wheel_Joint" class="motor" />
    <motor name="R_behind_wheel_Joint_ctrl" joint="R_behind_wheel_Joint" class="motor" />

    <position name="L_knee_pitch_pos" joint="L_knee_pitch_Joint" kp="200" kv="10" />
    <position name="R_knee_pitch_pos" joint="R_knee_pitch_Joint" kp="200" kv="10" />
    <position name="L_hip_pitch_pos" joint="L_hip_pitch_Joint" kp="500" kv="20" />
    <position name="R_hip_pitch_pos" joint="R_hip_pitch_Joint" kp="500" kv="20" />
    <position name="L_knee_behind_pos" joint="L_knee_behind_Joint" kp="500" kv="15" />
    <position name="R_knee_behind_pos" joint="R_knee_behind_Joint" kp="500" kv="15" />
    <position name="LR_pos" joint="LR_Joint" kp="500" kv="20" />
    <position name="RR_pos" joint="RR_Joint" kp="500" kv="20" />
    <position name="LF_pos" joint="LF_Joint" kp="300" kv="10" />
    <position name="RF_pos" joint="RF_Joint" kp="300" kv="10" />
    <position name="body_pos" joint="body_Joint" kp="300" kv="10" />
    <position name="L_shoulder_pitch_pos" joint="L_shoulder_pitch_Joint" kp="100" kv="15" />
    <position name="R_shoulder_pitch_pos" joint="R_shoulder_pitch_Joint" kp="100" kv="15" />
    <position name="L_shoulder_roll_pos" joint="L_shoulder_roll_Joint" kp="100" kv="15" />
    <position name="R_shoulder_roll_pos" joint="R_shoulder_roll_Joint" kp="100" kv="15" />
    <position name="L_elbow_roll_pos" joint="L_elbow_roll_Joint" kp="100" kv="15" />
    <position name="R_elbow_roll_pos" joint="R_elbow_roll_Joint" kp="100" kv="15" />
    <position name="L_wrist_1_pos" joint="L_wrist_1_Joint" kp="100" kv="15" />
    <position name="L_wrist_2_pos" joint="L_wrist_2_Joint" kp="100" kv="15" />
    <position name="L_wrist_3_pos" joint="L_wrist_3_Joint" kp="100" kv="15" />
    <position name="R_wrist_1_pos" joint="R_wrist_1_Joint" kp="100" kv="15" />
    <position name="R_wrist_2_pos" joint="R_wrist_2_Joint" kp="100" kv="15" />
    <position name="R_wrist_3_pos" joint="R_wrist_3_Joint" kp="100" kv="15" />

    <!-- <velocity name="L_front_wheel_vel" joint="L_front_wheel_Joint" kv="50" ctrlrange="-1 1" forcerange="-200 200" />
    <velocity name="R_front_wheel_vel" joint="R_front_wheel_Joint" kv="50" ctrlrange="-1 1" forcerange="-200 200" />
    <velocity name="L_behind_wheel_vel" joint="L_behind_wheel_Joint" kv="50" ctrlrange="-1 1" forcerange="-200 200" />
    <velocity name="R_behind_wheel_vel" joint="R_behind_wheel_Joint" kv="50" ctrlrange="-1 1" forcerange="-200 200" /> -->
  </actuator>
<contact>
|
||||
<exclude body1="base_link" body2="LF_Link" />
|
||||
<exclude body1="LF_Link" body2="L_hip_pitch_Link" />
|
||||
<exclude body1="L_hip_pitch_Link" body2="L_knee_pitch_Link" />
|
||||
<exclude body1="L_knee_pitch_Link" body2="L_front_wheel_Link" />
|
||||
<exclude body1="base_link" body2="RF_Link" />
|
||||
<exclude body1="RF_Link" body2="R_hip_pitch_Link" />
|
||||
<exclude body1="R_hip_pitch_Link" body2="R_knee_pitch_Link" />
|
||||
<exclude body1="R_knee_pitch_Link" body2="R_front_wheel_Link" />
|
||||
<exclude body1="base_link" body2="LR_Link" />
|
||||
<exclude body1="LR_Link" body2="L_knee_behind_Link" />
|
||||
<exclude body1="L_knee_behind_Link" body2="L_behind_wheel_Link" />
|
||||
<exclude body1="base_link" body2="RR_Link" />
|
||||
<exclude body1="RR_Link" body2="R_knee_behind_Link" />
|
||||
<exclude body1="R_knee_behind_Link" body2="R_behind_wheel_Link" />
|
||||
<exclude body1="base_link" body2="body_Link" />
|
||||
<exclude body1="body_Link" body2="L_shoulder_pitch_Link" />
|
||||
<exclude body1="L_shoulder_pitch_Link" body2="L_shoulder_roll_Link" />
|
||||
<exclude body1="L_shoulder_roll_Link" body2="L_elbow_roll_Link" />
|
||||
<exclude body1="L_elbow_roll_Link" body2="L_wrist_1_Link" />
|
||||
<exclude body1="L_wrist_1_Link" body2="L_wrist_2_Link" />
|
||||
<exclude body1="L_wrist_2_Link" body2="L_wrist_3_Link" />
|
||||
<exclude body1="body_Link" body2="R_shoulder_pitch_Link" />
|
||||
<exclude body1="R_shoulder_pitch_Link" body2="R_shoulder_roll_Link" />
|
||||
<exclude body1="R_shoulder_roll_Link" body2="R_elbow_roll_Link" />
|
||||
<exclude body1="R_elbow_roll_Link" body2="R_wrist_1_Link" />
|
||||
<exclude body1="R_wrist_1_Link" body2="R_wrist_2_Link" />
|
||||
<exclude body1="R_wrist_2_Link" body2="R_wrist_3_Link" />
|
||||
</contact>

  <sensor>
    <framepos name="base_link_site_pos" objtype="site" objname="base_link_site" />
    <framequat name="base_link_site_quat" objtype="site" objname="base_link_site" />
    <framelinvel name="base_link_site_linvel" objtype="site" objname="base_link_site" />
    <frameangvel name="base_link_site_angvel" objtype="site" objname="base_link_site" />
    <velocimeter name="base_link_site_vel" site="base_link_site" />
  </sensor>
</mujoco>
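The wrist actuators in the model above are MuJoCo `<position>` servos with gains kp=100, kv=15. As a quick sketch (not MuJoCo source code, and the function name is hypothetical), the force such an actuator applies is the documented PD law force = kp * (ctrl - qpos) - kv * qvel:

```python
# Sketch of the PD servo law a MuJoCo <position> actuator implements,
# using the wrist gains from the model above (kp=100, kv=15).
# This mirrors MuJoCo's position-actuator shorthand; it is not MuJoCo code.
def position_servo_force(ctrl, qpos, qvel, kp=100.0, kv=15.0):
    """Force produced by a position actuator: stiffness on the position
    error, damping on the joint velocity."""
    return kp * (ctrl - qpos) - kv * qvel

# Target 0.5 rad, joint at 0.25 rad and at rest: pure proportional term.
print(position_servo_force(0.5, 0.25, 0.0))  # 25.0
# Same error, but the joint is already moving toward the target,
# so the damping term subtracts from the drive force.
print(position_servo_force(0.5, 0.25, 1.0))  # 10.0
```

With kv=0 the servo would oscillate about the target; the damping term is what makes the 100/15 gain pair settle.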
BIN  resources/robots/Proto/meshes/L1_Link.STL   (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/L2_Link.STL   (Normal file, binary not shown)
BIN  resources/robots/Proto/meshes/LW1_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/LW2_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/R1_Link.STL   (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/R2_Link.STL   (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/RW1_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/RW2_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/SL1_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/SL2_Link.STL  (Executable file, binary not shown)
BIN  resources/robots/Proto/meshes/SL3_Link.STL  (Executable file, binary not shown)
Some files were not shown because too many files have changed in this diff.
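The sensor block in the model pairs `<framelinvel>` with a `<velocimeter>` on the same site: the former reports the site's linear velocity in the world frame, the latter the same velocity expressed in the site's own frame. A minimal sketch of how the two readings relate (pure Python, no MuJoCo dependency; the function name is illustrative), with R the site's world orientation, v_site = R^T v_world:

```python
# Sketch: convert a world-frame velocity reading (framelinvel) into the
# site-frame reading a velocimeter would report, for a site yawed 90
# degrees about world z. v_site = R^T @ v_world.
import math

def rotate_world_to_site(R, v_world):
    # R^T @ v_world for a 3x3 rotation matrix stored as nested lists.
    return [sum(R[j][i] * v_world[j] for j in range(3)) for i in range(3)]

yaw = math.pi / 2  # site yawed 90 degrees about world z
R = [[math.cos(yaw), -math.sin(yaw), 0.0],
     [math.sin(yaw),  math.cos(yaw), 0.0],
     [0.0,            0.0,           1.0]]

v_world = [1.0, 0.0, 0.0]                  # framelinvel reading: world +x
v_site = rotate_world_to_site(R, v_world)  # what the velocimeter would read
print([round(c, 6) for c in v_site])       # [0.0, -1.0, 0.0]
```

Motion along world +x shows up on the yawed site's -y axis, which is why code consuming `base_link_site_vel` must account for the base orientation while `base_link_site_linvel` can be used directly in world coordinates.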