Hello everyone! Over the past year, as a Software Engineer, I've been experimenting with AI tools like ChatGPT and Copilot. In this article, I'm looking forward to sharing my experiences with these tools and their effects on my coding and problem-solving strategies. These AI assistants have gained significant attention in our field, and I've spent quite some time assessing how they fit into my daily work. If you're interested in a straightforward take on using these tools in software engineering, this article is for you.
A Large Language Model (LLM) is a sophisticated artificial intelligence (AI) system designed to understand and generate human language. It is trained by analyzing extensive collections of text data, enabling it to learn the complexities of language, including grammar, context, and everyday usage. This training relies on deep learning, a process in which the model detects patterns in how words and sentences are used. As a result, it can perform tasks such as writing essays, answering questions, and, in our focus area, coding. Notable examples of LLMs include the well-known ChatGPT, an all-around generative model, and GitHub Copilot, which is more specialized and focuses specifically on generating code. These models represent a significant advancement in AI, demonstrating a remarkable ability to interpret and produce language in ways that are increasingly indistinguishable from human output. Of course, this quick peek barely scratches the surface of the vast and intricate world of LLMs. If you haven’t heard of them (let alone used them), it’s a bit like saying you’ve only “sort of” heard of the internet; in today’s digital world, that’s practically impossible. LLMs are to tech today what selfies were to social media.
I've had many discussions with friends and colleagues about a common concern that's creating buzz in the tech world. The moment we started integrating these new tools into our workflow, we all had the same striking thought: "Wow, in a few years, we won't even be needed for these kinds of jobs. These tools will take over our roles." It's a valid concern, I agree. These tools are changing the game, revolutionising every stage of the software development lifecycle.
However, there's no cause for concern. As software engineers and programmers, our roles are constantly evolving, and this is simply another step in that journey. We remain the vital link in understanding and translating client needs into functional software. The nuances of client requirements and the art of crafting effective solutions are still very much in our hands. Until the day comes when clients can perfectly articulate their needs in a way that machines understand (and we're not holding our breath on that!), our skills and insights remain indispensable.
So let's welcome these new tools with open arms and a bit of healthy skepticism. They're here to assist us, not to take over our jobs. We are still at the helm, guiding the ship of innovation. These AI advancements are just new instruments in our toolkit, potentially making our work more enjoyable and rewarding than ever before.
Now, let’s see some of my experiences with AI Tools like ChatGPT and Copilot. I’ll share my insights from my journey, exploring how these tools have reshaped my approach to coding and problem-solving.
I began using ChatGPT in its first month of release, and it significantly accelerated my task completion. Initially, I employed it for routine tasks such as:
These tasks, though commonplace and simple, were greatly streamlined. ChatGPT served as an enhanced substitute for my daily Google searches, demonstrating a keen understanding of my requests. With the right input (prompt engineering), it provided relevant responses that integrated seamlessly into my projects, saving me the time and effort of redeveloping familiar functionality. It also sped up my understanding of library features and how to integrate them, conserving valuable time.
As time progressed, my reliance on ChatGPT increased, extending beyond simple queries to more complex issues. For example, it assisted in evaluating the effectiveness of my architectural decisions, suggesting improvements, and automating my code deployment through the creation of Docker and shell scripts.
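To give a flavor of the deployment helpers I mean, here is a minimal sketch of the kind of Dockerfile such a prompt might yield. It assumes a generic Node.js service with a `npm run build` step and an entry point at `dist/index.js`; these names are illustrative placeholders, not from any specific project of mine.

```dockerfile
# Build stage: install dependencies and compile the application
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only the built artifacts and production dependencies
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

Even for boilerplate like this, the split into a build stage and a slim runtime stage is exactly the sort of detail worth double-checking in the generated output before adopting it.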
Recognizing ChatGPT's potential for larger-scale tasks, I started leveraging it for more ambitious projects, including:
One day, while browsing my tech news feed, I stumbled upon an article about GitHub Copilot. Intrigued, I immediately downloaded the necessary VS Code extensions to give it a whirl the next day at work.
When I resumed my tasks at the office, I was amazed at Copilot’s first suggestion—it precisely matched what I had in mind to write. Initially, I thought, "Well, it's just a simple task," but my intrigue deepened as I worked on a backend project that diverged from the typical user-oriented systems, focusing on a unique and specialized application. It was here that Copilot truly shined. Even for complex, unconventional tasks, its code suggestions were strikingly accurate, often about 90-95% correct. This discovery led me to use ChatGPT less frequently and rely more on Copilot, especially since I spent most of my time in VS Code. My efficiency soared, and tasks were completed in nearly half the usual time.
However, this newfound dependency soon showed its downside. I encountered an issue with a piece of Copilot-generated code that I couldn't debug, despite my best efforts and even turning to ChatGPT for help. That was the moment of realization: I had become too reliant on these AI tools. In response, I wiped all the code Copilot had generated that day, disconnected from Copilot and WiFi, and started coding independently. By day’s end, I had written functional code from scratch. The sense of relief was immense, but I needed to understand what had gone wrong. Comparing my code with the earlier Copilot version, I realized that the error was so minor and so well buried that it would have taken days of debugging to find.
That experience was a wake-up call. I reverted to using ChatGPT merely as an alternative to Google searches and began to code more independently, reducing my reliance on language models. This went on for several months until I recognized that perhaps I had overcorrected. The real lesson was not to avoid these tools entirely, but to find a balanced, controlled way of using them.
I developed a new approach: Employ these tools for everyday tasks, and when tackling more complex challenges, provide detailed information and carefully filter their responses before integrating them into my projects. Now, I take an extra step – I thoroughly read through the AI-generated response and manually type it into my source code, ensuring each line is carefully reviewed before inclusion.
This method has proven highly effective. I haven't faced similar issues since, and my productivity has remained consistently high, demonstrating the value of a balanced approach to using AI in software development.
All in all, AI tools like ChatGPT and Copilot are becoming integral parts of the software development landscape, and it’s clear that they’re not here to replace us but to enhance our capabilities. My journey with such tools has been a testament to the fact that, when used wisely, they can be powerful allies in boosting productivity and solving problems more efficiently. The key is to strike a balance between leveraging their strengths and maintaining our own critical thinking and problem-solving skills. By combining our natural problem-solving skills with AI’s rapid processing, we can extend the limits of our potential in software engineering, achieving more than we ever thought possible. A gentle reminder: stay agile and grow with these innovations, tapping into their full potential while ensuring that we, as software engineers, remain in the driver’s seat.
TIP OF THE DAY
If you’re a student, you can access GitHub Copilot (and many more) for free. More details are available on the GitHub Student Developer Pack page.