AI in Software Engineering: Can I become the Copilot?

Is an AI-powered sidekick the best way to deal with the complexities of software engineering and programming? Can it become our trusted guide, or will it steer us astray? Should we be worried about how reliant we might become on such tools? Well, I believe I've gathered some useful insights and tips over the past year of having these tools in my workflow.

Introduction

Hello everyone! Over the past year, as a Software Engineer, I've been experimenting with AI tools like ChatGPT and Copilot. In this article, I'll share my experiences with these tools and their effects on my coding and problem-solving strategies. These AI assistants have gained significant attention in our field, and I've spent quite some time assessing how they fit into my daily work. If you're interested in a straightforward take on using these tools in software engineering, this article is for you.

What Is a Large Language Model (LLM)?

A Large Language Model (LLM) is a sophisticated artificial intelligence (AI) system designed to understand and generate human language. It is trained on extensive collections of text data, which enables it to learn the complexities of language, including grammar, context, and everyday usage. This training relies on a process called deep learning, in which the model detects patterns in how words and sentences are used. As a result, it can perform tasks such as writing essays, answering questions, and, in our focus area, writing code. Notable examples of LLMs include the well-known ChatGPT, an all-purpose generative model, and GitHub Copilot, which is more specialized and focuses specifically on generating code. These models represent a significant advancement in AI, demonstrating remarkable abilities to interpret and produce language in ways that are increasingly hard to distinguish from human writing.

Of course, this quick peek barely scratches the surface of the vast and intricate world of LLMs. If you haven’t heard of them (or even used them), that’s a bit like saying you’ve “sort of” heard of the internet; in today’s digital world, that’s pretty much impossible. LLMs are to tech today what selfies were to social media.

Aren’t LLMs the End of Programmers and Software Engineers?

I've had many discussions with friends and colleagues about a common concern that's creating buzz in the tech world. The moment we started integrating these new tools into our workflow, we all had the same striking thought: "Wow, in a few years, we won't even be needed for these kinds of jobs. These tools will take over our roles." It's a valid concern, I agree. These tools are changing the game, revolutionizing every stage of the software development lifecycle.

However, there's no cause for concern. As software engineers and programmers, our roles are constantly evolving, and this is simply another step in that journey. We remain the vital link in understanding and translating client needs into functional software. The nuances of client requirements and the art of crafting effective solutions are still very much in our hands. Until the day comes when clients can perfectly articulate their needs in a way that machines understand (and we're not holding our breath on that!), our skills and insights remain indispensable.

So let's welcome these new tools with open arms and a bit of healthy skepticism. They're here to assist us, not to take over our jobs. We are still at the helm, guiding the ship of innovation. These AI advancements are just new instruments in our toolkit, potentially making our work more enjoyable and rewarding than ever before.

My Experience with Such Tools

Now, let’s look at some of my experiences with AI tools like ChatGPT and Copilot, and how they have reshaped my approach to coding and problem-solving.

ChatGPT

I began using ChatGPT in the first month after its release, and it significantly accelerated my task completion. Initially, I employed it for routine tasks such as:

  1. Aligning a div element in a webpage.
  2. Manipulating data in an Excel file.
  3. Parsing text, CSV, or JSON files.
  4. Recommending libraries tailored to my specific needs.
  5. Assisting with the integration of a library into my project.
  6. Writing unit and functional tests for my code.
  7. Generating comments for my code.

These tasks, though commonplace and simple, were greatly streamlined. ChatGPT served as an enhanced substitute for my daily Google searches, demonstrating a keen understanding of my requests. With the right input (prompt-engineering), it provided relevant responses that slotted neatly into my projects, saving me the time and effort of redeveloping familiar functionality. It also sped up my understanding of library features and integration techniques.
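
To make those routine requests a bit more concrete, here is a minimal sketch (my own illustration, not an actual ChatGPT answer) of the kind of file-parsing snippet I would ask for; the file names and fields are hypothetical.

```python
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    """Read a CSV file and write its rows out as a JSON array."""
    with open(csv_path, newline="", encoding="utf-8") as csv_file:
        # Each row becomes a dict keyed by the CSV header names.
        rows = list(csv.DictReader(csv_file))

    with open(json_path, "w", encoding="utf-8") as json_file:
        json.dump(rows, json_file, indent=2)

if __name__ == "__main__":
    # "users.csv" and "users.json" are placeholder file names for illustration.
    csv_to_json("users.csv", "users.json")
```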

As time progressed, my reliance on ChatGPT increased, extending beyond simple queries to more complex issues. For example, it assisted in evaluating the effectiveness of my architectural decisions, suggesting improvements, and automating my code deployment through the creation of Docker and shell scripts.
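
For a flavor of what that deployment automation looked like, here is a rough Python sketch standing in for the actual Docker and shell scripts ChatGPT drafted for me; it simply drives the docker CLI, and the image tag and port are placeholders.

```python
import subprocess

IMAGE = "my-service:latest"  # placeholder image tag
PORT = "8000"                # placeholder port mapping

def build_and_run() -> None:
    """Build a Docker image from the current directory and start a container."""
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "run", "-d", "-p", f"{PORT}:{PORT}", IMAGE], check=True)

if __name__ == "__main__":
    build_and_run()
```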

Recognizing ChatGPT's potential for larger-scale tasks, I started leveraging it for more ambitious projects, including:

  1. Generating endpoints that receive parameters, execute queries, fetch results, and apply algorithms to the data (a small sketch of this follows the list).
  2. Constructing algorithms for mathematical problems.
  3. Analyzing my written code to provide insights: assessing optimization, suggesting better file organization for easier maintenance, identifying uncovered cases, and evaluating my coding practices.
  4. Assisting in debugging and troubleshooting code.
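
Item 1 is easier to show than to describe, so here is a minimal sketch of such an endpoint, using Flask purely as an illustration; the route, parameter, and in-memory data are hypothetical, and the real endpoints queried a database rather than a hard-coded list.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory data standing in for real query results.
MEASUREMENTS = [
    {"sensor": "a", "value": 12.5},
    {"sensor": "a", "value": 14.0},
    {"sensor": "b", "value": 7.25},
]

@app.route("/average")
def average():
    """Receive a parameter, fetch the matching data, and apply a simple algorithm (a mean)."""
    sensor = request.args.get("sensor", "a")
    values = [m["value"] for m in MEASUREMENTS if m["sensor"] == sensor]
    if not values:
        return jsonify({"error": f"unknown sensor {sensor!r}"}), 404
    return jsonify({"sensor": sensor, "average": sum(values) / len(values)})

if __name__ == "__main__":
    app.run(debug=True)
```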

GitHub Copilot

One day, while browsing my tech news feed, I stumbled upon an article about GitHub Copilot. Intrigued, I immediately downloaded the necessary VS Code extensions to give it a whirl the next day at work.

When I resumed my tasks at the office, I was amazed at Copilot’s first suggestion—it precisely matched what I had in mind to write. Initially, I thought, "Well, it's just a simple task," but my intrigue deepened as I worked on a backend project that diverged from the typical user-oriented systems, focusing on a unique and specialized application. It was here that Copilot truly shined. Even for complex, unconventional tasks, its code suggestions were strikingly accurate, often about 90-95% correct. This discovery led me to use ChatGPT less frequently and rely more on Copilot, especially since I spent most of my time in VS Code. My efficiency soared, and tasks were completed in nearly half the usual time.

However, this newfound dependency soon showed its downside. I encountered an issue with a piece of Copilot-generated code that I couldn't debug, despite my best efforts and even turning to ChatGPT for help. That was the moment of realization: I was too reliant on these AI tools. In response, I wiped all the code Copilot had generated that day, disconnected from Copilot and the WiFi, and started coding independently. By day’s end, I had written functional code from scratch. The sense of relief was immense, but I still needed to understand what had gone wrong. Comparing my code with the earlier Copilot version, I realized that the error was so minor and so well buried that it would have taken days of debugging to find.

The Aftermath

That experience was a wake-up call. I reverted to using ChatGPT merely as an alternative to Google searches and began to code more independently, reducing my reliance on language models. This went on for several months until I recognized that perhaps I had overcorrected. The real lesson was not to avoid these tools entirely, but to find a balanced, controlled way of using them.

I developed a new approach: employ these tools for everyday tasks, and when tackling more complex challenges, provide them with detailed context and carefully filter their responses before integrating them into my projects. Now, I take an extra step: I read the AI-generated response thoroughly and type it into my source code by hand, ensuring each line is carefully reviewed before inclusion.

This method has proven highly effective. I haven't faced similar issues since, and my productivity has remained consistently high, demonstrating the value of a balanced approach to using AI in software development.

Suggestions

  1. Master the Tools, Don't Be Mastered: Prioritize learning how to effectively utilize AI tools while maintaining control. This means being proactive and discerning about when and how to use these tools, ensuring you remain in the driver's seat of your projects.
  2. Engage in Critical Analysis and Customization: Routinely assess and tailor the solutions offered by AI tools. It's essential to adapt their output to your specific project needs, taking into account factors such as performance, scalability, and maintainability to ensure optimal integration.
  3. Practice ‘AI Pair Programming’: Treat the tool as a pair programmer. Present your problem, consider the AI's solution, discuss it (even if just with yourself), and then refine it together.
  4. Utilize for Learning and Adaptation: Use AI tools as a means to enhance your learning. Let them assist in code breakdown, library discovery, and integration suggestions. Moreover, deepen your understanding by exploring the logic behind their solutions and experimenting with different approaches.
  5. Develop Effective Prompting Skills: Focus on improving your prompt-engineering skills. This involves learning how to formulate questions and requests to maximize the effectiveness and relevance of the responses from AI tools. Keep an eye out for a forthcoming blog detailing personal experiences and insights on this skill.
  6. Establish Usage Boundaries: Set clear guidelines for what tasks are appropriate for AI assistance and which should be tackled independently. This approach helps to prevent over-reliance on these tools, ensuring you continue to develop and challenge your own problem-solving skills.
  7. Stay Informed on AI Progress: Keep abreast of the latest developments in AI technology. Understanding the evolving capabilities and limitations of these tools enables you to use them more effectively and adapt to the changing technological landscape.

Conclusion

All in all, AI tools like ChatGPT and Copilot are becoming integral parts of the software development landscape, and it’s clear that they’re not here to replace us but to enhance our capabilities. My journey with these tools has shown me that, when used wisely, they can be powerful allies in boosting productivity and solving problems more efficiently. The key is to strike a balance between leveraging their strengths and maintaining our own critical thinking and problem-solving skills. By combining our natural problem-solving abilities with AI’s rapid processing, we can push the limits of our potential in software engineering, achieving more than we ever thought possible. A gentle reminder: stay agile and grow with these innovations, tapping into their full potential while ensuring that we, as software engineers, remain in the driver’s seat.


TIP OF THE DAY

If you’re a student, you can access GitHub Copilot (and many more) for free. More details are available on the GitHub Student Developer Pack page.


Any thoughts? 🤔
