We compared writing tests for Go and JavaScript using CodiumAI and GitHub Copilot.
Both tools perform poorly when given no example tests to work from.
Respondents differ on how many tests you need to write yourself to give the AI a "push". Some say you need to write 3-4 high-quality unit tests by hand; others first set up a test framework, write a single test, and let GitHub Copilot's autocompletion generate the rest.
In CodiumAI, test writing is implemented as a separate feature and happens in two stages: first the test suite is generated, then each test is generated with a separate request to the LLM. GitHub Copilot, by contrast, generates tests with a single request (in chat), or one test at a time in autocomplete mode. CodiumAI's per-test requests noticeably improve the output and produce a large number of tests at once.
CodiumAI also tries to be creative: it invents tests for functionality the function doesn't actually have but which, in its opinion, might be useful. This is a plus.
CodiumAI also has a cool feature that lets you run tests and automatically fix them right from the plugin. There is one limitation though: if a test relies on global variables defined for the test suite, the plugin does not make them available in the context in which the test is executed.
An option for the truly lazy: use GitHub Copilot to write the first test (it most likely won't work), refactor it into a shape you like, and then feed it to CodiumAI as an example for generating all the other tests.