Case 1: Writing code annotations

Our main aim was to create documentation for all the classes, protocols, and functions in the KYC flow of an iOS banking application. The original code was complex, hard to follow, and entirely undocumented.

We used GitHub Copilot Chat and Cursor Chat.

Lessons learned

Break the task into smaller segments

Prompts like “write documentation for this 200-line file” usually don't yield optimal results. It's far more effective to ask the chat to describe classes incrementally. Since the chat can only handle a limited context, it works better when it gathers information about specific functions rather than an entire file.

Additionally, GPT tends to generate more meaningful annotations for a class when there are existing comments in its code.
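For illustration, here is roughly the kind of per-function annotation the incremental approach produced for us. This is a sketch, not output from the actual project: the function, type, and error names below are hypothetical.

```swift
import UIKit

/// Hypothetical error type, included so the example is self-contained.
enum DocumentValidationError: Error {
    case unsupportedFormat
}

/// Validates the document photo captured during the KYC flow.
///
/// Checks that the image can be decoded before it is passed on to OCR.
///
/// - Parameter image: The photo of the identity document taken by the user.
/// - Returns: `true` if the photo passes the quality checks.
/// - Throws: `DocumentValidationError.unsupportedFormat` if the image data
///   cannot be decoded.
func validateDocumentPhoto(_ image: UIImage) throws -> Bool {
    guard image.cgImage != nil else {
        throw DocumentValidationError.unsupportedFormat
    }
    // Real sharpness and framing checks would go here.
    return true
}
```

Asking for one such annotation at a time kept the whole function in context, which is why the parameter and throws descriptions came out accurate.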

Control the context of the chat

Both Copilot Chat (since version 0.12.1) and Cursor Chat allow adding specific files to the chat context. Copilot uses the # tag for this purpose (e.g., #file:MyFile.swift), while Cursor uses the @ symbol (@MyFile.swift). Cursor Chat is more flexible about what can be added to the context, allowing specific code snippets from various files or even entire folders. In practice, though, the two tags work in much the same way.
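For instance, the same request looks like this in each tool (the file name here is hypothetical):

```
Copilot Chat:  Summarize the public API of #file:KYCDocumentUploader.swift
Cursor Chat:   Summarize the public API of @KYCDocumentUploader.swift
```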

More importantly, simply adding a file to the context doesn't guarantee that the GPT model will actually use it when generating a response. At times, Copilot Chat may disregard the currently opened tab even when it is explicitly mentioned in the prompt (potentially a bug).

To include the entire project in the prompt, use the @workspace tag (Copilot) or the “with codebase” option (Cursor). In this case, some of the project files are included in the prompt automatically, which can improve the response when related files are contextually significant. On the other hand, for local functions or classes, excluding the workspace may be preferable: its inclusion can confuse the model and lead to an ambiguous final output.
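A rough rule of thumb we arrived at (the prompts and names below are illustrative, not from the project):

```
Benefits from @workspace / "with codebase":
  @workspace Where is KYCSessionState mutated outside the flow coordinator?

Better scoped to the open file:
  Write a documentation comment for the private helper formatDateOfBirth(_:).
```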

All in all, context is key to satisfactory results, but it may take some time and several attempts before Copilot or Cursor picks up exactly the files you need.

Provide an example

AI assistants perform well when given examples. In one scenario, we had several similar protocols and wanted to describe them uniformly. At first we tried to generate annotations for all of them at once, but the results varied from protocol to protocol. However, once we provided an example and rephrased the prompt to “complete the rest of the annotations, taking X as an example”, the results improved drastically.

The number of provided examples also matters: giving the chat 2-3 good examples yields a more stable outcome.
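To make the technique concrete, here is a minimal sketch (all protocol names are hypothetical, not from the actual project). The first protocol carries the hand-written annotation that plays the role of the example “X”; the second is what the chat is asked to document in the same style:

```swift
import Foundation

/// Coordinates the document upload step of the KYC flow: prepares the data
/// the screen needs and reports completion back to the flow coordinator.
protocol DocumentUploadStepHandling {
    /// Starts the step and loads any data required by its screen.
    func start()
    /// Called when the user finishes the step; forwards the result onward.
    func complete()
}

// Prompt: "Complete the rest of the annotations, taking
// DocumentUploadStepHandling as an example."
protocol LivenessCheckStepHandling {
    func start()
    func complete()
}
```

Because the reference annotation fixes the tone, level of detail, and doc-comment structure, the generated annotations for the remaining protocols come out consistent with it.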

Overall, the following algorithm proves effective:

  1. Generate a result for one entity.
  2. Manually correct if needed, as similar minor issues may persist in future outputs.
  3. Generate a result for the second entity, referring to the first.
  4. Manually correct if needed.
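In practice, the prompts for steps 1 and 3 of this loop looked roughly like this (the entity names are made up for illustration):

```
Step 1: Write a documentation comment for AddressVerificationService.
Step 3: Write a documentation comment for IncomeVerificationService,
        following the corrected annotation of AddressVerificationService.
```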