When I talk to developers and development managers about GitHub, there’s one surprisingly widely held sentiment:
“What’s left to do?”
Sure, everyone’s got something they wish that GitHub would do better. You’d be amazed how often I get asked about the availability of usage metrics for GitHub Copilot, even with the relatively new Copilot Metrics API. Since GitHub Actions brought native CI/CD to GitHub in 2018, though, it feels like GitHub has hit the long tail of feature development: nearly everything that everyone wants is done. What’s left is the eighty percent of features, each of which might be wanted by only twenty percent of the user base.
Which is a challenge, if you’re GitHub. You’re already the most popular source code repository. You’ve already got more than 100 million developers.
So, what’s next?
At this year’s GitHub Universe (October 29-30, in San Francisco), GitHub’s idea of what’s next was clear: it’s Copilot, everywhere.
GitHub’s original Copilot coding assistant was powerful, but fundamentally limited. Built on a modified version of OpenAI’s GPT-3 LLM, it had essentially two settings, “on” and “off,” and provided code suggestions based on the code the user was currently working with.
This could be a powerful tool for individual developers, but enterprise leaders were asking questions. “Okay,” they would say. “This is an expensive tool, but I don’t just want my developers using it like Stack Overflow. If I’m paying for this tool, I want to be able to manage it, measure its effectiveness, and target it at the pain points that my organization as a whole is facing.”
This year at the Fort Mason Center, GitHub’s Copilot future feels laser-targeted at those critics. Some of the experiences touted by the team will probably not come to fruition – or even reach broad public release – for months, if not years. But the clouds are clearing on the future for Copilot in the enterprise, which seems to rest on two key pillars.
Depending on who you ask, software developers spend only a bit more than half their time coding, or nearly none of it (this Microsoft Research study cites figures from 9% to 61%). Most of the rest of the time is communicative: documentation, requirements gathering, collaboration, and discussion. Even if Copilot can help developers save time during the actual coding, there’s a lot of work that Copilot can’t directly assist with – yet.
Eventually, GitHub hopes that Copilot will be able to help not just developers write their code faster, but also accelerate the entire software development lifecycle, from ideation to operationalization.
Right now, GitHub wants to bring AI to two specific parts of the lifecycle: the ideation-to-definition phase at the beginning of a project, where developers and business owners work together to turn ideas into specific product requirements; and the review phase, where developers get a second pair of eyes on their code to catch the inevitable misspellings and other errors – ideally before they slip through and need fixing later.
Copilot Workspace is GitHub’s attempt to integrate these solutions. Launched in limited technical preview in April, Workspace uses Copilot to turn ideas into plans and then into code. Now, GitHub Copilot Code Reviews lets you ask Copilot to review your code, and Copilot will, where possible, include suggested changes that can be applied directly.
Copilot code review also supports custom instructions. These “coding guidelines” allow organizational administrators to shape Copilot’s feedback around their organization’s best practices, and to deliver that feedback to users quickly, without waiting for a human reviewer. This can encourage quicker, wider adoption of best practices that human reviewers might overlook or enforce inconsistently.
Copilot’s early years in the enterprise space were often rocky. Enterprises want control and customization, and Copilot didn’t offer a lot of knobs and dials.
Copilot’s first customized enterprise experiences were based on retrieval-augmented generation: essentially, allowing Copilot to search through your organization’s repositories and knowledge base and then use the answers to inform Copilot’s responses. This allowed Copilot to answer user questions, but didn’t change how Copilot answered those questions.
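The shape of that retrieval-augmented approach is simple enough to sketch. Below is a minimal toy version, assuming a naive keyword-overlap retriever (real systems, including Copilot’s, use semantic embeddings and vector search, and the documents and query here are invented for illustration):

```python
# A toy sketch of retrieval-augmented generation (RAG): fetch the most
# relevant organizational documents for a query, then prepend them to the
# prompt so the model answers from your knowledge base, not guesswork.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query; return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt with retrieved context ahead of the question."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

# Hypothetical knowledge-base snippets:
docs = [
    "Our services authenticate with short-lived OAuth tokens.",
    "Deployments run through GitHub Actions on every merge to main.",
    "The style guide requires docstrings on all public functions.",
]
prompt = build_prompt("How do deployments work?", docs)
```

The key point the article makes holds even in this sketch: retrieval changes what information the model sees, not how the model itself behaves.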
That’s now changing in two important ways: fine-tuned enterprise models and custom instructions.
Fine-tuned models for Copilot Enterprise entered limited public beta in September 2024. Building new models from scratch is time-consuming and expensive, and most customers don’t have enough data to make training their own models worthwhile. Instead, GitHub uses a technique known as low-rank adaptation, which takes an existing general-purpose model and essentially builds a style guide into it: “Based on this small set of examples, here is how your outputs should look in the future.” This doesn’t inject new data into the model, but it does align it better with the specific coding patterns common in your organization.
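The arithmetic behind low-rank adaptation is worth a quick sketch. Instead of retraining a full weight matrix W, LoRA trains two small matrices B and A whose product forms a low-rank update, and uses W + B·A at inference time. A toy illustration in plain Python (the matrix sizes are hypothetical; real models have dimensions in the thousands):

```python
# Toy low-rank adaptation (LoRA): rather than updating every entry of a
# large d x d weight matrix W, train B (d x r) and A (r x d) with small
# rank r, and use W + B @ A as the adapted weight.

def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    return [
        [sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
        for i in range(len(X))
    ]

def lora_adapt(W, B, A):
    """Return W + B @ A, the LoRA-adapted weight matrix."""
    delta = matmul(B, A)
    return [
        [W[i][j] + delta[i][j] for j in range(len(W[0]))]
        for i in range(len(W))
    ]

d, r = 4, 1                        # full dimension vs. adapter rank
W = [[0.0] * d for _ in range(d)]  # frozen base weights (zeros for clarity)
B = [[1.0] for _ in range(d)]      # d x r trainable matrix
A = [[0.1, 0.2, 0.3, 0.4]]         # r x d trainable matrix

W_adapted = lora_adapt(W, B, A)

# Trainable parameters: d*r + r*d = 8 instead of d*d = 16 for the full
# matrix; the savings grow dramatically as d increases while r stays small.
```

This is why fine-tuning becomes affordable: only B and A are trained, while the expensive base weights stay frozen.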
Custom Instructions, announced at Universe, and now in public preview, is another way that organizations can tailor Copilot’s outputs. Custom Instructions allows you to create a file in each repository (.github/copilot-instructions.md) that will be added to the context of all Copilot chat requests.
For example, you might include a custom instruction that says, “We always comment each function, so when writing a function please provide a descriptive comment about the function’s purpose.” This can help guide Copilot’s responses toward your organization’s specific best practices.
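Concretely, a minimal `.github/copilot-instructions.md` along those lines might look like this (the guidelines shown are hypothetical examples for illustration, not GitHub defaults):

```markdown
# Copilot instructions for this repository

- We always comment each function: when writing a function, include a
  descriptive comment about the function's purpose.
- Prefer our in-house logging helper over ad-hoc print statements.
- All public APIs must validate their inputs.
```

Because the file lives in the repository, these instructions are versioned and reviewed like any other code.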
Back when making use of Copilot meant flipping a switch and paying a bill, the question we were asked most often by clients was “Our developers want Copilot, but how can I prove the return on investment?”
Today, answering that question is more complex. It’s not just about how you measure ROI, but about what steps you take with Copilot to create that ROI in the first place. Whether you’re just considering Copilot or your developers are already using it for production code, now’s the time to strategize about how your organization can get the most out of it, and to optimize your workflows accordingly.
On the training side, ask yourself: how comfortable are your developers with generative AI capabilities and best practices? Do they know what Copilot can do? Do they know how to use it to accelerate repetitive tasks? You’d be surprised how often developers – even those who use tools like ChatGPT – don’t really have a good grasp of how to get the most out of Copilot.
On the implementation side, how do you plan to standardize and productize your use of Copilot across the team? Are there coding standards and guidelines you want Copilot to follow? Are there best practices that Copilot can help you enforce? How do you write these policies and instructions in a way that Copilot most effectively adopts them?
And on the governance side, if measuring ROI is crucial, where are you starting? What baselines have you got in place for developer experience and productivity, and what are you looking to have Copilot improve? How?
No matter where you are in your Copilot journey, feel free to reach out for a chat and an independent perspective on how you can drive ROI and security in your organization using GitHub and Copilot!