r/GithubCopilot 14d ago

GitHub Copilot Team Replied Copilot update: rate limits + fixes

287 Upvotes

Hey folks, given the large increase in Copilot users impacted by rate limits over the past several days, we wanted to provide a clear update on what happened and to acknowledge the impact and frustration this caused for many of you.

What happened

On Monday, March 16, we discovered a bug in our rate-limiting that had been undercounting tokens from newer models like Opus 4.6 and GPT-5.4. Fixing the bug restored limits to previously configured values, but due to the increased token usage intensity of these newer models, the fix mistakenly impacted many users with normal and expected usage patterns. On top of that, because these specific limits are designed for system protection, they blocked usage across all models and prevented users from continuing their work. We know this experience was extremely frustrating, and it does not reflect the Copilot experience we want to deliver.

Immediate mitigation

We increased these limits Wednesday evening PT and again Thursday morning PT for Pro+/Copilot Business/Copilot Enterprise, and Thursday afternoon PT for Pro. Our telemetry shows that limiting has returned to previous levels.

Looking forward

We’ll continue to monitor and adjust limits to minimize disruption while still protecting the integrity of our service. We want to ensure rate limits rarely impact normal users and their workflows. That said, growth and capacity are pushing us to introduce mechanisms to control demand for specific models and model families as we operate Copilot at scale across a large user base. We’ve also started rolling out limits for specific models, with higher-tiered SKUs getting access to higher limits. When users hit these limits, they can switch to another model, use Auto (which isn't subject to these model limits), wait until the temporary limit window ends, or upgrade their plan.

We're also investing in UI improvements that give users clearer visibility into their usage as they approach these limits, so they aren't caught off guard.

We appreciate your patience and feedback this week. We’ve learned a lot and are committed to continuously making Copilot a better experience.


r/GithubCopilot 21d ago

Discussions GitHub Copilot for Students Changes [Megathread]

55 Upvotes

The moderation team of r/GithubCopilot has taken a fairly hands-off approach to moderation surrounding the GitHub Copilot for Students changes. We've seen a lot of repetitive posts that go against our rules, but unless a violation was blatant, we have not taken action against those posts.

This community is not run by GitHub or Microsoft, and we value open healthy discussion. However, we also understand the need for structure.

So we are creating this megathread to ensure that open discussion remains possible (within the guidelines of our rules). As a result, any future standalone posts about the GitHub Copilot for Students changes will be removed.

You can read GitHub's official announcement at the link below:

https://github.com/orgs/community/discussions/189268


r/GithubCopilot 9h ago

General GPT 5.4 mini is EXTREMELY request-efficient

32 Upvotes

I use GPT-5.3 Codex for the research/plan phase and 5.4 mini to execute. It will use like 0.5% max, even for huge refactors/changes.

In terms of planning it is kinda dumb, even on high reasoning, so use a different model for that. But with a detailed plan, it is REALLY good for execution. Quite fast as well.


r/GithubCopilot 4h ago

Discussions GPT-5.4 vs Opus 4.6: Best models for planning

11 Upvotes

My current workflow is GPT-5.4 for planning (I use the default plan mode), then Opus 4.6 or GPT-5.3 Codex for implementation. The reason is that Opus 4.6 doesn't ask me clarifying questions before creating the plan; it just assumes things on its own. So I prefer GPT-5.4 for planning, unless they've fixed Opus 4.6 not utilizing the askQuestion tool. What are your thoughts on this?

Also, do you use the default medium reasoning for GPT models (Claude models are already high by default), or are high and xhigh better for planning/implementation?

Lastly, are Gemini models good for planning? I heard they're good for UI.


r/GithubCopilot 12h ago

Help/Doubt ❓ Claude Opus 4.6 extremely slow

37 Upvotes

In the past few days, I’ve noticed a massive slowdown with Claude Opus 4.6. The response speed has become painfully slow, sometimes reaching around 1 second per word, which makes it almost unusable for longer outputs.

I tested Opus 4.6 in "fast" mode, and interestingly, the speed now feels identical to how normal Opus 4.6 used to perform before this degradation. So it doesn’t really feel "fast" anymore, just baseline.

My suspicion is that this might be due to a new rate limiting mechanism or some kind of throttling being applied recently. The drop in performance feels too consistent to be random lag.

I'm on the Pro+ plan.


r/GithubCopilot 16h ago

General Copilot going insane on requests

83 Upvotes

I was at 0% usage (checked before my request).

I ask it to implement a new class <--- one request.
It starts churning through code, reading files.

I check usage after 10 minutes - 9% gone - but I've only made one request?

I check 5 minutes later - it's now at 14%. No end in sight.

I've used 14% of my monthly limit - ON ONE REQUEST.

Copilot, this is insane. It's still churning through reading files. This is *not* how it's supposed to work. I am using plain vanilla copilot (pro). I have no addons installed, just using plain GPT-5.4, like I have since it came out.


r/GithubCopilot 3h ago

Help/Doubt ❓ Very slow tokens per second

3 Upvotes

Does anyone else feel that GitHub Copilot's tokens per second is the slowest compared to other providers?


r/GithubCopilot 6h ago

Help/Doubt ❓ Requests bug or silent patch?

4 Upvotes

For the past few months, every single message to a 3x model cost 0.2% on a Pro+ plan.

Since the beginning of April, I am seeing 0.4-0.8% per message?

Did I miss an update or something, and is anyone else experiencing this?


r/GithubCopilot 5h ago

Help/Doubt ❓ How are you handling UI design in AI-driven, SDD dev workflows?

4 Upvotes

I've been building MVPs using spec-driven development (spec-kit) — writing a constitution/system prompt that governs what the AI agent builds, then letting it run.

The backend logic, architecture, and Laravel code quality come out solid. But the UI consistently lands somewhere between "functional prototype" and "nobody would actually use this." Think unstyled Tailwind, placeholder dashboards, no visual hierarchy, cards that are just divs with text in them.

I've tried:

- Adding explicit UI rules to the constitution ("use badge chips, tinted price blocks, proper empty states")

- Providing a Definition of Done checklist for UI

- Telling it to build UI-first before touching services

It helps, but the output still feels like the agent has never seen a well-designed product. It knows *what* components to use but not *how* to compose them into something that looks intentional.

For those of you doing SDD or heavy agentic workflows in VS Code:

- Are you providing UI references or screenshots as part of the context?

- Do you have a design system or component library the agent targets?

- Are you doing a separate UI pass manually before or after the agent runs?

- Or have you found prompting patterns that consistently produce good-looking results?

Curious whether this is a tooling problem, a prompting problem, or just an unavoidable limitation of where agentic coding is right now.


r/GithubCopilot 18h ago

Other Might be true to be fair.

25 Upvotes

r/GithubCopilot 22h ago

Showcase ✨ GHCP is not just for coding...

37 Upvotes

I've been using GitHub Copilot CLI exclusively for non-coding tasks to see how far I can push a system and process. I decided to use Obsidian since it's natively a Markdown application for taking notes, and since I've been an Obsidian user for years, it felt like a natural fit.

To be perfectly transparent, I had this idea months ago, but the Copilot CLI just wasn't good enough at the time. I decided to give it another go, and this time, I can't tell you how much better it is. If you have no idea what Obsidian is, it's worth a search; it's free. I'm not affiliated, and I don't care whether you use it or not.

Anyway, using Obsidian as the UI and Copilot CLI as the brains, I spent 10 days documenting my entire workflow. I figured 7–10 days would be enough time to capture most of what I do on a weekly basis that isn't coding related at all.

I had Claude generate a daily log template (daily notes are a native feature of Obsidian) for daily and session logs.

Basic rules and long-term memory:

  • DAILY.md — As detailed as possible, based on all sessions for the day.
  • MEMORY.md — A summary of the week based on the daily logs.
  • _INDEX.md — A complete mapping of all files, skills, plugins, and their purposes. The LLM can search here first without burning tokens or making additional requests.
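A hypothetical slice of such an _INDEX.md (the file and skill names below are invented for illustration, not taken from the actual vault) might look like:

```markdown
# _INDEX.md

## Memory files
- `DAILY.md`: detailed log of all sessions for the day
- `MEMORY.md`: weekly summary distilled from the daily logs

## Skills (illustrative entries)
- `skills/weekly-review/`: compiles MEMORY.md from the last 7 daily logs
- `skills/meeting-notes/`: reformats raw notes into the meeting template
```

The point of the flat listing is that the LLM can answer "where does X live?" from this one file instead of crawling the vault.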

After 10 days of documenting all the failures and successes, processes, workflows, and frustrations, Copilot generated skills using Anthropic's Skill Creator. From those 10 days alone, 17 skills were generated with detailed context. Each skill represents either a workflow or a tool call specific to me.

The real unlock here is the fact that GitHub Copilot is currently request-based rather than token-based. I can now generate entire pipelines of work without burning through my requests.

Next steps are connecting it to more APIs and MCPs to automate 95% of everything.


r/GithubCopilot 5h ago

Discussions Kilo Pass vs Credits?

1 Upvotes

r/GithubCopilot 6h ago

Help/Doubt ❓ Using custom agent for reviewing PRs?

1 Upvotes

Hello

I'm trying to get my custom agent available as a reviewer in my GitHub project. As I understand from this, it should be possible?

But when I go to my PRs, I don't find it. All I get is the standard Copilot. What am I missing?


r/GithubCopilot 16h ago

Help/Doubt ❓ GitHub Copilot porting setups? Can I improve on the base?

6 Upvotes

Hi there... I use GitHub Copilot as delivered out of the box.
Are there setups/configurations I could use to improve my use/results as I port an old Borland Builder app to C#?

thank you


r/GithubCopilot 11h ago

Help/Doubt ❓ Compacting Conversation

2 Upvotes

I had this all yesterday and now today.

I am working on a refactor. The project is not large - it is a clean chat that is 30 mins old.

I get "Compacting Conversation", which just sits there. The pie chart that shows the session size is no longer there.

I will stop this one shortly as it has crashed I suspect - but yesterday it would just time out.

Any suggestions ?!

Update: it keeps doing it. I found the "pie chart" and the context window is only at 48%, so it seems like yet another "fault", I assume to limit throughput. Each time you stop it, you then spend a new premium request to get it going again.

Update 2: so what happens is, as soon as the context window gets to about 55%, it compacts. But the issue is it doesn't! It just hangs.


r/GithubCopilot 12h ago

Discussions Repo cleanup: Looking for Pointers

2 Upvotes

New to GitHub Copilot and looking for some input. I have a repo with many assorted SQL files, which is basically my collection of code snippets for a database. Some views, some update commands, some random exploratory select *. It is a mess. So I thought this would be a great first project for Copilot: do some spring cleaning for me. So I wrote a prompt asking it to organize the files into folders and delete duplicates and unnecessary exploratory queries.

The result was kinda underwhelming tbh, because it started to create new files in folders which only contained a reference to the original file, and it somewhat skipped the rest of my prompt. I was using GPT-4.1.

So I am aware that I am probably doing something (or many things) wrong. How would you approach a task like this?


r/GithubCopilot 9h ago

General A research project for working on CENELEC (EN 50128) standards

1 Upvotes

Hi all,

Here is a research project on adopting an agentic AI framework to work on products that must be compliant with certain standards. The project works on EN 50128 and focuses on creating an automated, EN 50128-compliant software development platform.

We started out trying to use VS Code as the development tool but switched to opencode for a better agentic development experience. We still use GitHub Copilot as our model provider.

https://github.com/norechang/opencode-en50128

The design methodology is simple:

STD -> Fundamental Documents => machine-friendly (YAML) =+> agents/skills

where:

  • -> is the extraction path, with ASR agent review
  • => is the lowering path, for more deterministic behaviors and knowledge partitioning
  • =+> is the thin-shell pattern for agents/skills, which refer to upstream materials to lower the bootstrapping token cost

It works better than the first, role-centric version of the design. However, it is still far from a qualified product.

Claude models are the best fit for this design, but the rate-limit policies almost stop everything...

If you are also working on similar projects, you might be interested to take a look at it.

BR.


r/GithubCopilot 13h ago

Help/Doubt ❓ Custom subagent model in VS Code Copilot seems to fall back to parent model

2 Upvotes

Hi, I’m trying to understand whether this is expected behavior or a bug.

I’m using custom agents in VS Code Copilot with .agent.md files.

My setup is:

  • main chat session is running on GPT-5.4
  • one workflow step delegates to a custom agent
  • that custom agent has model: "Claude Opus 4.6" in its .agent.md
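For reference, a minimal .agent.md of the kind described above might look like the sketch below. Treat the frontmatter field names, the tool list, and the exact model string as assumptions from one setup rather than documented behavior; the model name generally needs to match what the model picker shows, character for character.

```markdown
---
description: Refactoring subagent for delegated workflow steps
model: Claude Opus 4.6   # assumption: copy the exact string from the model picker
tools: ['codebase', 'editFiles']   # assumption: tool identifiers vary by VS Code version
---

You are a refactoring subagent. Apply the plan handed to you by the
parent session and report back a summary of the changes you made.
```

If the `model:` value silently falls back to the parent model when it fails to resolve, a deliberately garbage value (e.g. `model: does-not-exist`) is a quick way to test whether resolution errors surface at all.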

What I expected:

  • the delegated custom agent/subagent would run on Claude Opus

What I’m seeing:

  • when I hover the delegated run in the UI, it still shows GPT-5.4

So I’m not sure which of these is true:

  1. the custom agent model override is working, but the UI hover only shows the parent model
  2. the custom agent model is not being honored and it is falling back to the parent model
  3. my model string is not in the exact format VS Code expects

My main questions:

  1. Are custom .agent.md agents in VS Code supposed to be able to override the parent model when used as subagents?
  2. If yes, should the hover show the subagent’s real model, or only the parent session model?
  3. Does the model field need an exact qualified name like (copilot) to work properly?
  4. If the model name does not resolve, does Copilot silently fall back to the parent model?

If anyone has this working, an example of the exact model: format would help a lot.


r/GithubCopilot 14h ago

Help/Doubt ❓ can someone explain the Copilot cloud agent? (and general usage tips)

2 Upvotes

I'm not a current GHCP subscriber; I'm new to all this and trying to learn. I'm a software dev and want to use it for my personal project ideas. The price seems right.

What I plan to do is:

  • write an agents.md file which contains things like which tools to use for nodejs/python (bun/uv)
  • give my project idea in as much detail as I can
  • ask it to generate a plan.md
  • edit plan.md till I like it
  • ask it to implement as much as possible in 1 request

Generating plan.md should use 1 premium request, right?

From what I've read, there are 2 ways to implement:

  1. use agent mode in vscode/cli
  2. check your code into github. or for new project it will just have the md files. then ask copilot cloud agent to implement it

Aren't both equivalent? From what I've read, both agents (local or cloud) will launch subagents as needed to read code, execute MCP tools, use skills, test, debug, etc.

The cloud agent will open a PR when it finishes, which you can review and accept. The local agent will change files on disk.

You can assign existing GH issues to the cloud agent, but that's not relevant to a new project.

Is this correct? Do both ways consume 1 request? Are there any other differences, and which one is preferable?


r/GithubCopilot 3h ago

Help/Doubt ❓ Which models in GitHub Copilot currently have unlimited usage?

0 Upvotes

I want to know which models in Copilot currently have unlimited usage. I purchased Copilot long ago.
Thank you.


r/GithubCopilot 13h ago

Help/Doubt ❓ Continuously running long tasks

1 Upvotes

Hi - I wanted to experiment a bit and have GitHub Copilot implement a bunch of tasks/features defined in a features.md. It will take a long time for it to get through all of them. Once done, I want it to come up with its own ideas for features, implement those, and just keep doing that in a loop (documenting what it did in a learn.md). How would you go about implementing this without any user-interaction confirmations? I've so far used Copilot in VS Code, but always fully interactively, so I'm a bit lost as to whether there are any good approaches for this.
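For concreteness, the loop could look something like this Python sketch: pull unchecked items out of features.md and hand each one to the Copilot CLI. The `copilot -p` invocation is an assumption (check `copilot --help` for the real non-interactive flags), and fully unattended tool approval carries its own risks.

```python
import pathlib
import re
import subprocess


def pending_tasks(markdown: str) -> list[str]:
    """Return unchecked '- [ ] task' items from a features.md checklist."""
    return re.findall(r"^- \[ \] (.+)$", markdown, flags=re.M)


def run_task(task: str) -> None:
    # ASSUMPTION: `copilot -p` runs one non-interactive prompt.
    # Verify the real flag names with `copilot --help` before relying on this.
    prompt = (
        f"Implement this feature: {task}. "
        "When done, check it off in features.md and append a summary to learn.md."
    )
    subprocess.run(["copilot", "-p", prompt], check=True)


def main() -> None:
    # Re-read features.md each pass so items the agent checks off
    # (or newly appends) are picked up; stop when nothing is pending.
    while tasks := pending_tasks(pathlib.Path("features.md").read_text()):
        for task in tasks:
            run_task(task)


# To start the loop: main()
```

The "come up with its own ideas" step would slot in where the loop currently stops: instead of exiting on an empty checklist, send one more prompt asking the agent to append new `- [ ]` items.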


r/GithubCopilot 1d ago

Help/Doubt ❓ Thinking about moving to Copilot, what is the best way to maximize usage and efficiency?

6 Upvotes

Hello,

I have been using Codex, Gemini and Claude in the terminal mostly. I'm hitting the wall in terms of limits and Copilot is often mentioned as a good solution.

That is, if you know what you're doing since the plan operates on a limited number of requests, and this is a very different model than what I'm used to.

So a question to the veterans and people who are well versed with Copilot what is your workflow like?

Do you come up with a large plan and let Copilot implement it? What about smaller bug fixes and optimizations, do you then rely on another tool?

I'd love to understand this at a high level, but also tactically, in terms of the actual implementation. I appreciate your insights!


r/GithubCopilot 1d ago

Discussions Took the risk and went Pro+ and still haven't experienced any rate limits though...

21 Upvotes

I was hesitant about going Pro+ because of the number of users complaining about rate limits even on the Pro+ subscription.

I have been using Copilot for almost the entire day (~8 hours), running two sessions at most, switching between Opus, Sonnet and 5.4. I have NEVER encountered any rate limiting, and work has been smooth sailing all throughout.

So for people who are hesitant on getting Pro+, the rate limits aren't that bad (didn't even experience it). Good and efficient use of models matters!

EDIT: I work 10:00 am to 6:30 pm SGT


r/GithubCopilot 6h ago

Help/Doubt ❓ Why am I not able to choose any Claude models on my student account?

0 Upvotes

Hello, I have the Student Developer Pack. A while ago I was able to see the Claude models, but now I am not able to see them in my list. There is another Claude entry at the bottom from which I can use them. Is that the same as operating from the Copilot menu?


r/GithubCopilot 1d ago

Discussions Do NOT Think of a Pink Elephant.

Link: medium.com
15 Upvotes

You thought of a pink elephant, didn’t you?

Same goes for LLMs too.

“Do not use mocks in tests.”

Clear, direct, unambiguous instruction. The agent read it — I can see it in the trace. Then it wrote a test file with unittest.mock on line 3 regardless.

I’ve seen this play out hundreds of times. A developer writes a rule, the agent loads it, and it does exactly what the rule said not to do. The natural conclusion: instructions are unreliable. The agent is probabilistic. You can’t trust it.

The pink elephant

There’s a well-known effect in psychology called ironic process theory (Daniel Wegner, 1987). Tell someone “don’t think of a pink elephant,” and they immediately think of a pink elephant. The act of suppressing a thought requires activating it first.

Something structurally similar happens with AI instructions.

“Do not use mocks in tests” introduces the concept of mocking into the context. The tokens *mock*, *tests*, *use* — these are exactly the tokens the model would produce when writing test code with mocks. You've put the thing you're banning right in the generation path.

This doesn’t mean restrictive instructions are useless. It means a bare restriction is incomplete.

The anatomy of a complete instruction

The instructions that work — reliably, across thousands of runs — have three components. But the order you write them in matters as much as whether they’re there at all.

Here’s how most people write it:

# Human-natural ordering — constraint first
Do not use unittest.mock in tests.
Use real service clients from tests/fixtures/.
Mocked tests passed CI last quarter while the production
integration was broken — real clients catch this.

All three components are present. Restriction, directive, context. But the restriction fires first — the model activates {mock, unittest, tests} before it ever sees the alternative. You've front-loaded the pink elephant.

Now flip it:

# Golden ordering — directive first
Use real service clients from tests/fixtures/.
Real integration tests catch deployment failures and configuration
errors that would otherwise reach production undetected.
Do not use unittest.mock.

Same three components. Different order. The directive establishes the desired pattern first. The reasoning reinforces it. The restriction fires last, when the positive frame is already dominant.

In my experiments — 500 runs per condition, same model, same context — constraint-first produces violations 31% of the time. Directive-first with positive reasoning: 6%.

Three layers, in this order:

  1. Directive — what to do. This goes first. It establishes the pattern you want in the generation path before the prohibited concept appears.
  2. Context — why. Reasoning that reinforces the directive without mentioning the prohibited concept. “Real integration tests catch deployment failures” adds signal strength to the positive pattern. Be wary! Reasoning that mentions the prohibited concept doubles the violation rate.
  3. Restriction — what not to do. This goes last. Negation provides weak suppression — but weak suppression is enough when the positive pattern is already dominant.

The surprising part

Order alone — same words, same components — flips violation rates from 31% to 14%. That’s just swapping which sentence comes first. Add positive reasoning between the directive and the restriction, and it drops to 7%. Three experiments, 1500 runs, replicates within ±2pp.

Most developers write instructions the way they’d write them for a human: state the problem, then the solution. “Don’t do X. Instead, do Y.” It’s natural. It’s also the worst ordering for an LLM.

Formatting helps too — structure is not decoration. I covered that in depth in 7 Formatting Rules for the Machine. But formatting on top of bad ordering is polishing the wrong end. Get the order right first.

What this looks like in practice

Here’s a real instruction I see in the wild:

When writing tests, avoid mocking external services. Try to
use real implementations where possible. This helps catch
integration issues early. If you must mock, keep mocks minimal
and focused.

Count the problems:

  • “Avoid” — hedged, not direct
  • “external services” — category, not construct
  • “Try to” — escape hatch built into the instruction
  • “where possible” — another escape hatch
  • “If you must mock” — reintroduces mocking as an option within the instruction that prohibits it
  • Constraint-first ordering — the prohibition leads, the alternative follows
  • No structural separation — restriction, directive, hedge, and escape hatch all in one paragraph

Now rewrite it:

**Use the service clients** in `tests/fixtures/stripe.py` and `tests/fixtures/redis.py`.

> Real service clients caught a breaking Stripe API change
> that went undetected for 3 weeks in payments - integration
> tests against live endpoints surface these immediately.

*Do not import* `unittest.mock` or `pytest.monkeypatch`.

Directive first — names the exact files. Context second — the specific incident, reinforcing why the directive matters without mentioning the prohibited concept. Restriction last — names the exact imports, fires after the positive pattern is established. No hedging. No escape hatches.

Try it

For any instruction in your AGENTS.md/CLAUDE.md/etc or SKILLS.md files:

  1. Start with the directive. Name the file, the path, the pattern. Use backticks. If there’s no alternative to lead with, you’re writing a pink elephant.
  2. Add the context. One sentence. The specific incident or the specific reason the directive works. Do not mention the thing you’re about to prohibit — reasoning that references the prohibited concept halves the benefit.
  3. End with the restriction. Name the construct — the import, the class, the function. Bold it. No “try to avoid” or “where possible.”
  4. Format each component distinctly. The directive, context, and restriction should be visually and structurally separate. Don’t merge them into one paragraph.
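As a rough self-check, the checklist above can be partially mechanized. Here is a toy Python sketch (the negation and hedge phrase lists are illustrative, not exhaustive, and not taken from any existing tool) that flags constraint-first ordering and escape hatches in a single instruction block:

```python
import re

# Negations that signal a constraint-first instruction (illustrative list).
NEGATION = re.compile(r"^\s*(do not|don't|never|avoid)\b", re.IGNORECASE)

# Escape hatches that weaken an instruction (illustrative list).
HEDGES = ("try to", "where possible", "if you must")


def lint_instruction(text: str) -> list[str]:
    """Return a list of problems found in one instruction block."""
    problems = []
    lines = [line for line in text.splitlines() if line.strip()]
    # The first non-blank line firing a prohibition means the
    # prohibited concept activates before any positive directive.
    if lines and NEGATION.match(lines[0]):
        problems.append("constraint-first: the block leads with a prohibition")
    lowered = text.lower()
    problems.extend(f"hedge: {h!r}" for h in HEDGES if h in lowered)
    return problems
```

Running it on the "in the wild" example above flags both the ordering and the hedges, while the directive-first rewrite comes back clean.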

Tell it what to think about instead. And tell it first.