- next site
- branches
- every experiment should have a metric/KPI (with thresholds for when it has failed and when it has succeeded) and an end date (see the config sketch after this list)
- when telling the PR that it's an experiment:
    - it becomes an A/B test
    - the PR gets notifications about how the experiment is going
    - it auto-closes when it reaches the metric/KPI threshold or the end date
- evolution:
    - think about the metric and what we can do to improve it
    - gather options
    - run experiments (branches)
    - evaluate
    - converge the changes
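A minimal sketch of what such an experiment definition could look like, written for the blank-page example below; the interface and field names (`ExperimentConfig`, `successThreshold`, the concrete values) are assumptions, not an existing schema:

```typescript
// Hypothetical experiment definition -- all field names are illustrative.
interface ExperimentConfig {
  branch: string;           // the experiment branch this config describes
  metric: string;           // name of the metric/KPI being tracked
  successThreshold: number; // KPI value at which the experiment succeeded
  failureThreshold: number; // KPI value at which it is declared failed
  endDate: Date;            // hard stop: evaluate whatever we have by then
}

const blankPageExperiment: ExperimentConfig = {
  branch: "experiment/blank-page-copy",
  metric: "avg_time_on_site_seconds",
  successThreshold: 90, // succeeded: average visit reaches 90s
  failureThreshold: 30, // failed: average visit drops below 30s
  endDate: new Date("2025-01-31"),
};
```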
Example:
- a blank page
- metric: increase the average amount of time spent on the site
- open a PR with a change to the blank page
- evaluate when the KPI is reached or the end date passes
- merge or close the PR accordingly
First, allow experiments from PRs:
- the experiment starts when the PR gets a certain comment ("start experiment")
- the build system takes in all the "experiment branches"
- it emits each branch's files prefixed with the branch name, plus a new file with the if/feature-flag control that references those files (see the first sketch after this list)
- it also tags each metric event with the experiment/branch it came from
- the A/B system occasionally comments on the progression of the experiments in the PR
- it also reads the PR comments, so I can tell it to pause an experiment or change its exposure (second sketch below)
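A sketch of what that generated control file might look like for a single `page.ts` touched by one experiment branch; the file layout, the branch name, and the visitor-bucketing helper are all assumptions:

```typescript
// Hypothetical auto-generated control file -- all names are illustrative.
// The build system would emit "page.main.ts" and
// "page.experiment-bigger-cta.ts" next to it, one copy per branch.
import { render as renderMain } from "./page.main";
import { render as renderBiggerCta } from "./page.experiment-bigger-cta";

const EXPOSURE = 0.5; // fraction of visitors routed into the experiment

// Assumed helper: deterministic per-visitor bucketing.
function inExperiment(visitorId: string): boolean {
  let hash = 0;
  for (const ch of visitorId) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  return (Math.abs(hash) % 1000) / 1000 < EXPOSURE;
}

export function render(visitorId: string): string {
  const branch = inExperiment(visitorId) ? "experiment-bigger-cta" : "main";
  // Tag the metric event with the branch so KPI movement is attributable.
  console.log(JSON.stringify({ event: "page_view", experiment: branch }));
  return branch === "main" ? renderMain() : renderBiggerCta();
}
```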
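And a toy parser for the control comments; the command syntax ("pause experiment", "exposure 25%") is made up for illustration:

```typescript
// Hypothetical PR-comment commands -- the syntax is illustrative only.
type Command =
  | { kind: "start" }
  | { kind: "pause" }
  | { kind: "exposure"; fraction: number };

function parseCommand(comment: string): Command | null {
  const text = comment.trim().toLowerCase();
  if (text === "start experiment") return { kind: "start" };
  if (text === "pause experiment") return { kind: "pause" };
  const match = text.match(/^exposure (\d+)%$/);
  if (match) return { kind: "exposure", fraction: Number(match[1]) / 100 };
  return null; // an ordinary comment, not a command
}
```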
Auto experiments
- define the metrics somewhere in the code, e.g. with performance.mark / performance.measure (see the first sketch after this list)
- tell an AI that you want this metric improved, and define when the experiment ends and what counts as failed or succeeded
- the AI opens branches and auto-closes experiments that reach an end state, then proposes new ones (loop forever; second sketch below)
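The metric definition can use the standard User Timing API; the metric name `time_to_first_interaction` is just an example:

```typescript
// Mark interesting points in the page lifecycle with the User Timing API.
performance.mark("page-visible");

document.addEventListener(
  "click",
  () => {
    performance.mark("first-interaction");
    // A named duration between the two marks; the A/B system would
    // collect these measures and aggregate them per experiment branch.
    const measure = performance.measure(
      "time_to_first_interaction",
      "page-visible",
      "first-interaction",
    );
    console.log(measure.duration); // ship this to the metrics backend
  },
  { once: true },
);
```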
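And a sketch of the outer loop, reusing the ExperimentConfig interface from the first sketch; every imported function here (`aiProposeBranch`, `openPr`, `readKpi`, ...) is an assumption, not an existing API:

```typescript
// Hypothetical driver loop -- every import below is an assumed function.
import { aiProposeBranch, openPr, mergePr, closePr, readKpi } from "./auto-ab";
import type { ExperimentConfig } from "./experiment-config";

interface Running { pr: number; config: ExperimentConfig }

async function loopForever(goalMetric: string): Promise<never> {
  const running: Running[] = [];
  while (true) {
    // Ask the AI for a new idea targeting the goal metric.
    if (running.length < 3) {
      const config = await aiProposeBranch(goalMetric);
      running.push({ pr: await openPr(config), config });
    }
    // Auto-close (or merge) experiments that reached an end state.
    for (const exp of [...running]) {
      const kpi = await readKpi(exp.config.metric, exp.config.branch);
      const expired = new Date() >= exp.config.endDate;
      if (kpi >= exp.config.successThreshold) await mergePr(exp.pr);
      else if (kpi <= exp.config.failureThreshold || expired) await closePr(exp.pr);
      else continue; // still running, leave it alone
      running.splice(running.indexOf(exp), 1);
    }
    await new Promise((r) => setTimeout(r, 60 * 60 * 1000)); // check hourly
  }
}
```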