Migrating a code base with AI and Codebuddy

Introduction

This blog post dives into using AI to migrate code from one platform to another, for instance transitioning a project from one library to another.

File Size Considerations

One of the essential elements to consider is maintaining small file sizes in both the source and destination code. This not only facilitates human maintenance and adheres to good coding practices but is also crucial for AI comprehension: smaller files let the AI digest code more efficiently and limit the amount of code it needs to process per change. Modern AIs can admittedly ingest large amounts of code, with context windows reaching 128,000 tokens for models like those developed by OpenAI, and even exceeding a million tokens for some larger models. Even so, new files should aim to stay under roughly 300 lines whenever possible.
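Before starting a migration, it can be useful to audit which files already exceed the line budget. A minimal sketch of such a check, written as a pure function so the threshold and file set are easy to adjust (the 300-line limit matches the guideline above; reading files from disk is left out for brevity):

```typescript
// Flag files whose contents exceed a line budget.
// `files` maps filename -> file contents.
function filesOverLimit(
  files: Record<string, string>,
  maxLines = 300
): string[] {
  return Object.entries(files)
    .filter(([, contents]) => contents.split("\n").length > maxLines)
    .map(([name]) => name);
}

// Example usage with inline contents; in practice you would read from disk.
const sample: Record<string, string> = {
  "small.ts": "a\nb\nc",
  "big.ts": Array(400).fill("line").join("\n"),
};
console.log(filesOverLimit(sample)); // flags only big.ts
```

Files flagged by a check like this are good candidates for splitting before the migration begins, so that each AI-assisted change stays within a digestible scope.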

Planning and Execution

To achieve successful migration, it's advisable to engage in a dialogue with the AI about the desired outcomes across multiple exchanges before commencing code generation. By doing so, the AI can formulate a comprehensive action plan that includes essential implementation elements. This plan then serves as a checklist, significantly enhancing the AI's ability to execute the code generation effectively.

Codebuddy in particular is designed with this concept in mind, as it always collaborates on a plan before initiating any code writing.
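As a sketch, an opening prompt for this planning phase might look like the following (the library and component names are placeholders, not real packages):

```text
We are migrating our buttons from old-ui's <Button> to new-ui's <Button>.
Before writing any code, please:
1. List the prop differences between the two components.
2. Propose a file-by-file migration plan as a checklist.
3. Flag any behaviour that has no equivalent in new-ui.
Do not generate code until we have agreed on the plan.
```

The explicit "do not generate code yet" instruction keeps the exchange focused on the plan until it is good enough to act as a checklist.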

Directing AI's Attention

During the planning process, it can be beneficial to direct the AI's attention to the aspects it should consider when generating the plan. For example, you might ask it to analyze the differences between the properties or attributes of the old component and the new one, and to compile a list of the changes the migration requires. Even with this guidance, the AI is unlikely to adhere to the plan as closely as desired, given the complexity and intellectual depth such a task demands. The current state of AI cannot seamlessly handle such intricate operations, so some level of guidance and intervention is still necessary. Consider the plan a solid initial attempt, one that will need subsequent adjustments and refinements based on further interactions and discussions.
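The prop differences the plan should enumerate can themselves be captured in code. A hypothetical example, where neither interface comes from a real library; they simply illustrate the kind of renames the AI should list, and a small adapter that makes each change explicit and reviewable:

```typescript
// Hypothetical old and new prop shapes for a button component.
interface OldButtonProps {
  type: "primary" | "secondary"; // renamed to `variant` in the new API
  onPress: () => void;           // renamed to `onClick`
  isDisabled?: boolean;          // renamed to `disabled`
}

interface NewButtonProps {
  variant: "primary" | "secondary";
  onClick: () => void;
  disabled?: boolean;
}

// One adapter per component keeps each migration step small and easy to review.
function toNewProps(old: OldButtonProps): NewButtonProps {
  return {
    variant: old.type,
    onClick: old.onPress,
    disabled: old.isDisabled ?? false,
  };
}
```

Asking the AI to produce a mapping like this during planning forces it to account for every property rather than glossing over differences.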

Unit Tests and Integration Tests

One particularly helpful practice during the migration process is the creation of unit tests or integration tests for the new components. These tests serve multiple purposes: they independently verify the functionality of the new components, ensuring they operate correctly within the overall system, and they help reinforce the requirements in the AI's understanding. For instance, when dealing with React components, setting up Storybook or utilizing React testing frameworks can be advantageous. By incorporating these testing methodologies, you not only validate the functionality of the migrated components but also provide additional clarity and assurance to the AI during the code generation process.
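A minimal, framework-agnostic sketch of such a test is a parity check: the migrated code must match the legacy code on the same inputs. Both helper functions below are hypothetical stand-ins, not real library code, and a real project would express this in its existing test framework:

```typescript
// Hypothetical legacy implementation being migrated away from.
function legacyFormatLabel(name: string): string {
  return name.trim().toUpperCase();
}

// Hypothetical migrated implementation; it must preserve legacy behaviour.
function migratedFormatLabel(name: string): string {
  return name.trim().toUpperCase();
}

// Parity test: run both implementations over shared cases and compare.
const cases = ["  save  ", "Cancel", "ok"];
for (const input of cases) {
  const expected = legacyFormatLabel(input);
  const actual = migratedFormatLabel(input);
  if (actual !== expected) {
    throw new Error(`Mismatch for "${input}": ${actual} !== ${expected}`);
  }
}
console.log("parity tests passed");
```

Sharing the test file with the AI before it generates the migrated component gives it a concrete, executable statement of the requirements.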

AI Hallucinations and Multiple Models

It's worth noting that during the code generation process, the AI may attempt to implement the new component using structures and concepts that are not native to the target framework but are prevalent in other popular frameworks. This tendency, known as hallucination, is common in AI and underscores the importance of thorough planning and discussion before code writing. The more detailed and specific the conversations about the intended changes, the higher the quality of the resulting code is likely to be. Therefore, engaging in comprehensive dialogue and providing clear instructions can help mitigate the occurrence of such hallucinations and ensure a smoother migration process overall.

Furthermore, using multiple AI models can enhance the quality of code generation. While models like GPT-4 or Anthropic's Claude Opus may perform comparably in many scenarios, certain tasks may be better suited to one model over another. Just as in real-life collaboration, multiple perspectives can lead to breakthroughs, and incorporating different AI models may provide fresh insights and approaches to the migration process. As they say, two heads are better than one, and the same may be true for addressing complex tasks with AI.

Experimenting with Prompts

Experiment with different prompts during the code generation process to achieve better outcomes. Submitting the same prompt or retrying a particular chat interaction can yield wildly different results, especially when there are numerous elements for the AI to consider, so sometimes a simple retry is enough. Failing that, it's worth refining the prompt: making it more specific, or directing the AI's attention to aspects it may have overlooked, can lead to better results than a plain retry.
