The agents developed in Duckietown Learning Experiences can be evaluated against a set of benchmarks defined as a Duckietown Challenge.
Learners may evaluate their agent locally via

```shell
dts code evaluate
```

or on the Duckietown Challenges Server via

```shell
dts code submit
```
In both cases, a report is generated against the challenge's benchmark metrics, along with visualizations of the agent's behavior in simulation. Server submissions are added to a running leaderboard, allowing learners to compare their solutions with previous work.
You can explore previous challenge definitions and the related student submissions on the Duckietown Challenges Server.
It is recommended that the evaluation metrics for a challenge be clearly defined in the Grading section of the Learning Experience's
README.md file, so that learners have clear performance goals for their agent.
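A Grading section might look like the following sketch. The metric names and thresholds here are hypothetical placeholders, not part of any official challenge definition; substitute the actual metrics reported by your challenge's evaluator.

```markdown
## Grading

Your agent is evaluated in simulation against the following metrics
(names and thresholds below are examples only):

| Metric             | Description                                  | Target   |
| ------------------ | -------------------------------------------- | -------- |
| Distance traveled  | Distance driven in the correct lane          | > 5 m    |
| Survival time      | Time before leaving the road or crashing     | > 30 s   |

Run `dts code evaluate` locally to generate a report against these
metrics before submitting with `dts code submit`.
```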
Encouraging frequent commits:
Learners should also be encouraged to bookmark their development at each submission attempt with a git commit referencing the submission number. This makes it easy to track prior attempts and revert to them.
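One way to bookmark an attempt is a minimal sketch like the following, assuming the submission number printed by `dts code submit` was 1234 (a hypothetical value):

```shell
# Record the state of the code that produced submission #1234
# (hypothetical number; use the one printed by `dts code submit`).
git add -A
git commit -m "Challenge submission #1234"
git tag submission-1234

# Later, revisit that exact attempt by checking out the tag:
git checkout submission-1234
```

Tagging each attempt in addition to committing makes the submission history easy to list with `git tag -l 'submission-*'`.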