Sandboxes
Each sandbox is a virtual machine that hosts a small, isolated environment of all Rocken services, including CRM, Talent, and Rocken Jobs.
Sandboxes are used for testing and developing features; they allow loading datasets and checking out specific branches for each service.
All sandboxes live under the common subdomain sb.rockengroup.com and are differentiated by their name.
For example, the sandbox john.sb.rockengroup.com has the name john and uses the common subdomain sb.rockengroup.com. Services like the CRM are reachable via the subdomain crm.john.sb.rockengroup.com; the same goes for Talent (talent) and Rocken Jobs (jobs).
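The naming scheme can be sketched as a small shell loop that composes the per-service URLs for a given sandbox; the service subdomains come from the example above, while the https scheme is an assumption:

```shell
# Compose the per-service URLs for a sandbox named "john".
SANDBOX=john
DOMAIN=sb.rockengroup.com
for svc in crm talent jobs; do
  echo "https://${svc}.${SANDBOX}.${DOMAIN}"
done
```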
Development Lifecycle of a feature
Let’s assume we are currently working on the feature “reset password” and it has the Jira ID RT-6000. There is a backend and a frontend component to this task.
A backend developer creates a new branch rt-6000-reset-password and starts working on the API. At the same time a frontend developer creates a similar branch rt-6000-password-reset. Notice that the branch names don’t need to match perfectly.
Eventually, frontend and backend developers are finished and hand over their code to QA.
QA checks out rt-6000-reset-password and rt-6000-password-reset on their sandbox john.sb.rockengroup.com and starts with manual testing. There might be multiple iterations where bugs are found and fixed by developers. Eventually, the feature passes the quality standards.
QA starts writing regression tests on the same backend branch rt-6000-reset-password in a dedicated subfolder (e.g. /qa_automation) and runs them on their sandbox. When everything is finished and the full regression suite passes, the code is committed and merged to the main branch in frontend and backend.
This triggers the CI/CD and all regression tests are executed on a dedicated sandbox (e.g. sb-integration). The code gets deployed to staging after all unit and regression tests pass.
To deploy the changes to production they need to be merged to the production branch.
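The branch flow described above can be sketched with plain git commands. The demo below runs in a throwaway repository so it is self-contained; in the real repositories main and production already exist, and the commits are placeholders:

```shell
# Demo of the branch flow in a throwaway repo.
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email qa@example.com
git config user.name "QA"
git commit -q --allow-empty -m "init"
git checkout -qb rt-6000-reset-password              # feature branch
git commit -q --allow-empty -m "RT-6000: reset password"
git checkout -q main
git merge -q --no-ff -m "Merge RT-6000" rt-6000-reset-password   # after QA sign-off
git checkout -qb production main                     # promote to production
```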
Working with a sandbox
Sandboxes are not persistent. Treat them as temporary: changes to code, the filesystem, or the database will not be backed up. The only ways to retain such information are to commit code to git, modify the provisioning scripts, or modify the datasets.
Ideally, sandboxes are deleted and rebuilt once a week. This ensures that they are up to date, have the correct OS and library versions, and work as intended. Recreating sandboxes regularly prevents the environments from drifting apart over time due to diverging configurations, which would make debugging more difficult.
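The weekly recreation could be scheduled with cron; the wrapper script name, path, and schedule below are all assumptions:

```
# Hypothetical crontab entry: rebuild the sandbox every Sunday at 03:00.
# recreate_sandbox.sh is an assumed wrapper around the provisioning scripts.
0 3 * * 0 /opt/rocken_scripts/recreate_sandbox.sh john
```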
To work with a sandbox, start by pulling the latest dataset and the main branches. Then change the data and branches as needed for testing or development.
It is best to either work via SSH on the sandboxes or sync your local code via scp or rsync. You can of course also use git to push and pull your changes, but this might become cumbersome during active development.
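A minimal rsync invocation for the sync workflow could look like the following. The demo syncs between two local temp directories so it can run anywhere; against a sandbox the destination would be something like john.sb.rockengroup.com:/srv/crm/ (that path is an assumption):

```shell
# Demo: mirror a working tree into a target directory with rsync,
# excluding .git and deleting files removed locally.
src=$(mktemp -d); dst=$(mktemp -d)
echo "print('demo')" > "$src/app.py"
mkdir -p "$src/.git" && echo ref > "$src/.git/HEAD"   # excluded below
rsync -az --delete --exclude .git "$src"/ "$dst"/
ls "$dst"
```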
Connecting to a sandbox via SSH should be possible, and all QAs and devs should be able to access all sandboxes. To make this possible, sync the SSH keys from GitLab and update them once a day on all sandboxes.
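GitLab serves each user's public SSH keys at https://&lt;gitlab-host&gt;/&lt;username&gt;.keys, so the daily key sync could be a simple cron job; the host, user list, and target path below are placeholders:

```
# Hypothetical crontab entry: refresh authorized_keys from GitLab once a day.
0 6 * * * for u in alice bob; do curl -fsS "https://gitlab.example.com/$u.keys"; done > /home/dev/.ssh/authorized_keys
```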
Using Datasets
All scripts for the sandboxes should be stored in a git repository named rocken_scripts (or something similar). We can start with one repository and split it as needed. The script names below are only examples and can be changed for clarity.
- Getting code: Fetching the code cleans the previous state and pulls the current state from git based on the provided branches. Consider raw file transfer versus a git checkout: raw files work better with rsync, while a checkout works better with git.

  $ scripts/use_code.sh
  $ scripts/use_code.sh --crm abc123 --jobs def123
- Getting dataset: We will have only a few datasets, but they will be versioned. Most of the time people will want the latest version, but previous versions must remain accessible too. This script updates all datasets, including the main database, service-specific databases, Redis, Elasticsearch, etc. The assumption is that after running it, people can start working with the data immediately.

  $ scripts/use_dataset.sh latest
  $ scripts/use_dataset.sh 20241120
- Reset dataset: Resetting the data is the same as loading the same dataset again, but optimized as much as possible to save time. The data must be reset before each automated regression run.

  $ scripts/reset_dataset.sh
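To make the intended behaviour concrete, here is a minimal sketch of what scripts/use_code.sh could look like. The flag names come from the examples above; everything else (the defaults, the /srv/&lt;service&gt; layout, and the checkout logic) is an assumption:

```shell
#!/bin/sh
# Minimal sketch of scripts/use_code.sh: parse per-service branch flags,
# defaulting each service to its main branch.
set -- --crm abc123 --jobs def123   # example invocation; normally "$@"
CRM_BRANCH=main; JOBS_BRANCH=main
while [ $# -gt 0 ]; do
  case "$1" in
    --crm)  CRM_BRANCH=$2;  shift 2 ;;
    --jobs) JOBS_BRANCH=$2; shift 2 ;;
    *) echo "unknown option: $1" >&2; exit 1 ;;
  esac
done
echo "crm:  checking out $CRM_BRANCH"
echo "jobs: checking out $JOBS_BRANCH"
# The real script would then clean each checkout and pull the branch, e.g.:
#   git -C /srv/crm fetch origin "$CRM_BRANCH" && \
#   git -C /srv/crm checkout -f "origin/$CRM_BRANCH"
```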
-
Working with / Updating Datasets
- Lock / block the dataset via a chat message, so that only one QA makes changes at a time
- Use one sandbox exclusively (e.g. SB3)
- $ scripts/use_code.sh (defaults to latest)
- $ scripts/use_dataset.sh (defaults to latest)
- Make your changes, or reset if you made a mistake
- $ scripts/create_dataset.sh 20241120
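One possible implementation of dataset versioning and the latest alias is a directory per version plus a symlink. This layout is an assumption, demonstrated in a temp directory so the demo runs anywhere:

```shell
# Sketch: one directory per dataset version, with "latest" as a symlink.
DATASET_DIR=$(mktemp -d)    # e.g. /srv/datasets on a real sandbox (assumed)
VERSION=20241120
mkdir -p "$DATASET_DIR/$VERSION"
# Database dumps and Redis/Elasticsearch snapshots would land here, e.g.:
#   pg_dump main_db > "$DATASET_DIR/$VERSION/main.sql"
ln -sfn "$DATASET_DIR/$VERSION" "$DATASET_DIR/latest"
ls -l "$DATASET_DIR"
```

With this scheme, use_dataset.sh latest simply follows the symlink, and create_dataset.sh only has to write a new version directory and repoint latest.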
