Maintaining applications that consume an API is a difficult task, since APIs tend to change and evolve over time. OpenStack provides not one API but dozens of them, so manual testing is not an option. Even Tempest, the official OpenStack testing project, only scratches the surface, as it covers a predefined set of quality gates. For improved coverage of real-world scenarios, the Robot Framework is one option. It comes with a library of frequently needed subtasks and a DSL for easily writing custom test cases. It has a simple structure, is easy to document, and supports debugging. In this talk, we explain the architecture of the Robot Framework, share our experiences with 2000+ continuously running tests, and discuss advantages and challenges. Specifically, we present the operational challenges of scheduling the tests, their integration into reporting, and visualization with Grafana. Finally, we discuss our effort to share insights with the public community, to integrate with extensions on the platform, and our lessons learned about the importance of proper resource cleanup. In a short demo, we write a test case and show the big picture as an insight into our QA efforts.
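
To give a flavor of the DSL mentioned above, here is a minimal, hypothetical sketch of a Robot Framework test case against an OpenStack API. The keyword library name (`OpenStackLibrary`) and the keywords themselves are illustrative assumptions, not part of any specific project:

```robotframework
*** Settings ***
# Hypothetical keyword library wrapping OpenStack API calls
Library    OpenStackLibrary

*** Test Cases ***
Create And Delete Server
    # Names, flavor, and image are placeholder values
    ${server}=    Create Server    name=robot-demo    flavor=m1.small    image=cirros
    Wait Until Server Is Active    ${server}
    # Explicit cleanup: leftover resources would pollute later test runs
    Delete Server    ${server}
```

In practice, cleanup steps like `Delete Server` are typically moved into a `[Teardown]` setting so they run even when the test fails.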