eranation 2 days ago [-]
Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
Setting min-release-age=7 in .npmrc (needs npm 11.10+) would have protected the 334 unlucky people who downloaded the malicious @bitwarden/cli 2026.4.0, published ~19+ hours ago (see https://www.npmjs.com/package/@bitwarden/cli?activeTab=versi... and select "show deprecated versions").
Same story for the malicious axios (@1.14.1 and @0.30.4, removed within ~3h), ua-parser-js (hours), and node-ipc (days). Wouldn't have helped with event-stream (sat for 2+ months), but you can't win them all.
~/.npmrc
min-release-age=7 # days
~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes
~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds
# not related to npm, but while at it...
~/.config/uv/uv.toml
exclude-newer = "7 days"
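The settings above are all meant to express the same 7-day window in each manager's unit; a quick arithmetic sanity check (plain shell, nothing package-manager-specific):

```shell
# Check that the cooldown values above all equal 7 days:
# pnpm takes minutes, bun takes seconds.
days=7
minutes=$((days * 24 * 60))
seconds=$((days * 24 * 60 * 60))
echo "$minutes minutes, $seconds seconds"
```

which prints `10080 minutes, 604800 seconds`, matching the pnpm and bun settings.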
p.s. shameless plug: I was looking for a simple tool that would check your settings / apply a fix, and was surprised I couldn't find one, so I released something (open source, free, MIT yada yada), since sometimes one-click fix convenience increases the chances people will actually use it. https://depsguard.com if anyone is interested.
> Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
Most of these attacks don't make it into the upstream source, so solutions[1] that build from source get you ~98% of the way there. If you can't get a from-source build and have to pull directly from the registries, you can reduce risk somewhat with a cooldown period.
For the long tail of stuff that makes it into GitHub, you need to do some combination of heuristics on the commits/maintainers and AI-driven analysis of the code change itself. Typically run that and then flag for human review.
Build from source is a great idea. I assume you provide SLSA/sigstore-like provenance as well?
arianvanp 2 days ago [-]
The chainguard folks built sigstore :)
eranation 1 day ago [-]
Yep yep, hence the ask: expected for containers, wondering if it also applies to build-from-source.
abustamam 2 days ago [-]
I like the idea of a cooldown. But my next question is: would this have been caught if no one updated? I know in practice not everyone would be on a cooldown. But presumably this compromise was only found out because a lot of people did update.
Ukv 2 days ago [-]
> presumably this compromise was only found out because a lot of people did update
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
vlovich123 1 day ago [-]
Better for the cooldown to be managed and guaranteed centrally by the package forge rather than ad hoc by each individual client.
piaste 1 day ago [-]
The cooldown is a defence against malicious actors compromising the release infrastructure.
Having the forge control it half-defeats the point; the attackers who gained permission to push a malicious release might well have also gained permission to mark it as "urgent security hotfix, install immediately, 0 cooldown".
vlovich123 1 day ago [-]
I have not heard anyone seriously discuss that cooldown prevents compromise of the forge itself. It’s a concern but not the pressing concern today.
And no, however compromised packages reach the forge, that is not the same thing as marking a release an “urgent security hotfix”, which would require manual approval from the forge maintainers, not an automated process. The only automated process would be a blackout period where automated scanners try to find issues, followed by a cool-off period where the release progressively rolls out to 100% of all projects that depend on it over the course of a few days or a week.
ornornor 1 day ago [-]
That’s tricky, sometimes you really need the new version to be available right away.
friendzis 1 day ago [-]
foobarizer>=1.2.3 vs foobarizer==1.2.5
vlovich123 1 day ago [-]
There’s ways to handle that. But that’s the exception, not the rule.
kjok 2 days ago [-]
Cooldown sounds like a good idea ONLY IF these so-called security companies can catch these malicious dependencies during the cooldown period. Are they doing this bit, or do individual researchers find the malware while these companies make headlines?
Groxx 1 day ago [-]
It seems less likely that they'll find it before you're bitten by it if you intentionally race against them by choosing newest all the time, yea?
yard2010 1 day ago [-]
Maybe we can let people that don't care about privacy try them first
otherme123 23 hours ago [-]
I am thinking about Django releases. They release a "Release Candidate", which you have to download by other means to test. I rarely do it. But when a new official release is out, I install it very easily in a testing environment and run my tests against it. I think this is what most people do, and it's the phase where supply-chain attacks get caught, because in that 48-hour window all the tests in the world are run.
It's not a lack of care about privacy; the 7-day delay is like a new stage between RC and final release, where you pull for testing but not for production.
subarctic 2 days ago [-]
Does it matter? The individual researchers could look at brand-new published packages just the same
hunter2_ 1 day ago [-]
For researchers who notice new releases as soon as they are published and discover malice based on that alone, I agree, and every step of that can be automated to some level of effectiveness.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
me-vs-cat 1 day ago [-]
Novel obfuscation, with a novel idea, is hard to invent. Novel obfuscation, where it is only new to that codebase, is easy(ier) to flag as suspicious.
While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.
Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.
What we need is to let the user easily move from implicitly trusting only the publisher to incorporating third parties. Any of those can be compromised, but users would be better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during cooldown.
Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
skybrian 2 days ago [-]
That assumes discovering a security bug is random and it could happen to anyone, so more shots on goal is better. But is that a good way to model it?
It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you’d probably know it? Most of the people who upgraded didn’t help; they just got owned.
A cooldown gives anyone who does investigate more time to do their work.
pmichaud 2 days ago [-]
If I were in charge of a package manager I would be seriously looking into automated and semi automated exploit detection so that people didn't have to yolo new packages to find out if they are bad. The checking would itself become an attack vector, but you could mitigate that too. I'm just saying _something_ is possible.
slekker 2 days ago [-]
It's a trade off for sure, maybe if companies could have "honeypot" environments where they update everything and deploy their code, and try to monitor for sneaky things.
teiferer 2 days ago [-]
It's easy for malicious code to detect sandboxing.
Also, check out the VW Diesel scandal.
PunchyHamster 2 days ago [-]
Not writing backend services or CLI tools on top of npm would be a good start.
MetaWhirledPeas 2 days ago [-]
Other package managers are magically immune?
c2h5oh 2 days ago [-]
They are not, but npm is uniquely bad in that regard. Refusal to implement security features that would have made attacks like this harder really doesn't help https://github.com/node-forward/discussions/issues/29
nirvdrum 2 days ago [-]
The lack of a comprehensive standard library for JavaScript also results in projects pulling many more third party dependencies than you would with most other modern environments. It’s just a bigger attack surface. And if you can compromise a module used for basic functionality that you’d get out of the box elsewhere, the blast radius will be enormous.
pico303 1 day ago [-]
Not to mention a culture of basically one-line packages ad infinitum. I downloaded a JS tool the other day to generate test reports and it had around 300 dependencies.
Needless to say I’m running all my JS tools in a Docker container these days.
wombatpm 1 day ago [-]
So why hasn’t someone created a batteries-included JS library? I don’t program in JS on the backend so I don’t know how feasible something like that is.
stevekemp 1 day ago [-]
https://github.com/stdlib-js/stdlib is one of several attempts at that, but yes, the issue is that different people have very different views of what should be standard.
fc417fc802 1 day ago [-]
That doesn't seem like it should be an issue in practice? Rather than a single standard library endorsed by the language stewards, if the community at large converges on a small handful of "standard" solutions, that seems like it would satisfy the security aspect of things.
girvo 1 day ago [-]
Everyone’s ideas of what batteries should be included differ
wolfi1 1 day ago [-]
I, for one, root for AAA
hfsh 19 hours ago [-]
*cries in ghettoblaster and Maglite D cells*
erikerikson 24 hours ago [-]
Lodash but also, which batteries?
mayama 2 days ago [-]
You could write most of these CLI tools using the stdlib in Python or Go, without needing to include hundreds of libraries even for trivial things.
NamlchakKhandro 1 day ago [-]
yes obviously.
isn't it obvious?
it should be obvious.
why isn't it obvious?
ljm 1 day ago [-]
Security by obscurity. If another language became as ubiquitous as JS then it'd be the same.
In the context of TFA, don't rely on third party github actions that you haven't vetted. Most of them aren't needed and you can do the same with a few lines of bash. Which you can also then use locally.
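To make the "few lines of bash" point concrete, here is a sketch of replacing a hypothetical third-party version-bump action with a POSIX-shell function you can run both in CI and locally (the function name and the MAJOR.MINOR.PATCH format are assumptions for illustration, not from the thread):

```shell
# Hypothetical stand-in for a third-party version-bump action:
# increment the patch component of a MAJOR.MINOR.PATCH string.
bump_patch() {
  major=${1%%.*}          # text before the first dot
  rest=${1#*.}            # text after the first dot
  minor=${rest%%.*}
  patch=${rest#*.}
  echo "$major.$minor.$((patch + 1))"
}

bump_patch "1.2.3"   # prints 1.2.4
```

No marketplace action to vet, no pinning-by-SHA to maintain, and it behaves identically on a laptop and in CI.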
n_e 2 days ago [-]
> Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).
Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).
These would catch most of the compromised packages, as most of them are published outside the normal release workflow with stolen credentials and are run from post-install scripts.
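The post-install point above maps to a one-line config (`ignore-scripts` is a real npm setting; pnpm already behaves this way by default):

```ini
# ~/.npmrc — refuse to run pre/post-install lifecycle scripts
ignore-scripts=true
```

Note that packages which compile native addons legitimately need install scripts, so you may have to build or allowlist those explicitly.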
eranation 2 days ago [-]
Yep! depsguard sets trustPolicy: "no-downgrade" where applicable.
tadfisher 2 days ago [-]
Cooldowns are passing the buck. These are all caught with security scanning tools, and AI is probably going to be better at this than people going forward, so just turn on the cooldowns server-side. Package updates go into a "quarantine" queue until they are scanned. Only after scanning do they go live.
woodruffw 2 days ago [-]
"Just" is doing a lot of work; most ecosystems are not set up or equipped to do this kind of server-side queuing in 2026. That's not to say that we shouldn't do this, but nobody has committed the value (in monetary and engineering terms) to realizing it. Perhaps someone should.
By contrast, a client-side cooldown doesn't require very much ecosystem or index coordination.
tadfisher 2 days ago [-]
Yeah, I should work on avoiding that word.
woodruffw 2 days ago [-]
I think the rest of your analysis is correct! I'm only pushing back on perceptions that we can get there trivially; I think people often (for understandable reasons) discount the social and technical problems that actually dominate modernization efforts in open source packaging.
charcircuit 1 day ago [-]
>Perhaps someone should
This kind of thinking is why I don't trust the security of open source software. Industry standard security practices don't get implemented because no one is being paid to actually care and they are disconnected from the users due to not making income from them.
woodruffw 24 hours ago [-]
Having been in both worlds, I don't think the median unpaid OSS developer is any more (or less) dispassionate about security outcomes than the median paid SWE. There's lots of "maybe someone should do this" in both worlds.
(With that said, I think it also varies by ecosystem. These days, I think I can reasonably assert that Python has extended significant effort to stay ahead of the curve, in part because the open source community around Python has been so willing to adopt changes to their security posture.)
2 days ago [-]
pxc 2 days ago [-]
The approach you outline is totally compatible with an additional one or two day time gate for the artifact mirrors that back prod builds. Deploy in locked-down non-prod environments with strong monitoring after the scans pass, wait a few days for prod, and publicly report whatever you find, and you're now "doing your part" in real-time while still accounting for the fallibility of your automated tools.
There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".
eranation 2 days ago [-]
I agree. Even without Project Glasswing (which Microsoft is part of), even with cheaper models and Microsoft's compute (Azure, the OpenAI collaboration), it makes no sense that private companies need to scan new package releases and find malware before npm does. I'm sure they have some reason for it (people rely on packages being immediately available on npm, and there's the real use case of patching a zero-day CVE quickly), but until this is fixed fundamentally, I'd say the default should be a cooldown (either server-side or not) and you'd need to opt in to get the current behavior. This might take years of deprecation though; I'm sure that if it were turned on now, a lot of things would break. (E.g. every CVE public disclosure would also have to wait that additional cooldown... and if Anthropic are not lying, we are bound for a tsunami of patched CVEs soon...)
tadfisher 2 days ago [-]
There are so many ways to self-host package repos that "immediate availability" to the wider npm-using public is a non-issue.
Exceptions to quarantine rules just invite attackers to mark malicious updates as security patches.
If every kind of breakage, including security bugs, results in a 2-3 hour wait to ship the fix, maybe that would teach folks to be more careful with their release process. Public software releases really should not be a thing to automate away; there needs to be a human pushing the button, ideally attested with a hardware security key.
NetMageSCW 1 day ago [-]
Isn’t the problem with a minimum release age that the opposite would also occur: a high-priority fix for a zero day under active exploit wouldn’t ship, and you could be compromised in that window?
eranation 1 day ago [-]
It is! It’s a tough problem to balance. The good news is that you can always override for specific cases. Linking to my other reply here: https://news.ycombinator.com/item?id=47880149
personalcompute 2 days ago [-]
Regarding doing more than just a minimum release age: The tool I personally use is Aikido "safe-chain". It sets minimum release age, but also provides a wrapper for npm/uv/etc where upon trying to install anything it first checks each dependency for known or suspected vulnerabilities against an online commercial vulnerability database.
neya 2 days ago [-]
Stop using JavaScript. Or TypeScript, or whatever excuses they have for the fundamentally flawed language that should have been retired eons ago instead of trying to get it fixed. JavaScript and its ecosystem have always been a pack of cards. Time and again this has been proven. I think this is like the 3rd big attack in the last 30 days alone.
eranation 1 day ago [-]
Yes, but it has nothing to do with the language and everything to do with the ecosystem (npm tried to mandate things such as MFA, but npmjs is so big that maintainers pushed back).
TypeScript on its own is a great language, with a very interesting type system. Most other type systems can’t run Doom.
It is being attacked precisely because it is ubiquitous. No one is going to attack Haskell, Erlang, or whatever no one uses.
fdsajfkldsfklds 2 days ago [-]
Never, ever type "npm i". This advice has served me well for many years.
nl 2 days ago [-]
> ~/.config/uv/uv.toml
> exclude-newer = "7 days"
Note that if you get
failed to parse year in date "7 days": failed to parse "7 da" as year (a four digit integer): invalid digit, expected 0-9 but got
then comment out the exclude and run
uv self update
duskdozer 1 day ago [-]
Update dependencies when you need something in them, not just because there's a new version available.
nedt 1 day ago [-]
But then at the same time you should always update, because it might fix a security vulnerability. Otherwise you end up running Node.js 10 because you don't need the new stuff.
duskdozer 24 hours ago [-]
Or it might introduce one. But sure, a security fix for a known vulnerability could count as something you need in a new version. Ideally they would be backported and separated from feature updates. The constant dependency churn and single-channel update stream is kind of why a lot of vulnerabilities become problems in the first place.
kippinsula 1 day ago [-]
we've been running Renovate with `minimumReleaseAge: '7 days'` across all our repos for a while now, which does basically the same thing across npm, PyPI, and Cargo in one config. the tradeoff is you're always 7 days behind on patches, but for anything touching CI or secrets tooling that feels like a fair deal. the nasty part of this class of attack is the timing window is usually sub-24h before it's pulled, so even 3 days would have caught this one.
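For reference, a minimal sketch of the Renovate config being described (`minimumReleaseAge` is a real Renovate option; the `extends` preset here is an assumption):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "minimumReleaseAge": "7 days"
}
```

One file covers npm, PyPI, and Cargo update PRs alike, which is what makes it attractive for polyglot orgs.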
fauigerzigerk 2 days ago [-]
I use a separate dev user account (on macOS) for package installations, VSCode extensions, coding agents and various other developer activities.
I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low hanging fruit sort of attacks.
bananadonkey 2 days ago [-]
Check your home folder permissions on macOS; last time I checked, mine were world-readable (until I changed them). I was very surprised by it, and only noticed when adding a new user account for my wife.
fauigerzigerk 1 day ago [-]
I noticed that too (and changed it). The home folder appears to be world readable because otherwise sharing via the Public folder wouldn't work. The folders where the actual data lives are not world readable.
I think this is a bad idea, because it means the permissions of any new folders have to be closely guarded, which is easy to forget.
hombre_fatal 2 days ago [-]
Maybe using a slower, stable package manager that still gets security/bug fixes, like nix.
madduci 2 days ago [-]
Renovate can do it as well
eranation 2 days ago [-]
Yep, depsguard has support for renovate and dependabot cooldown settings too.
nikcub 1 day ago [-]
compartmentalize. I do development and anything finance / crypto related / sensitive on separate machines.
If you're brave you can run whonix.
The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.
Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).
dirtbag__dad 2 days ago [-]
I guess this is the case for new installs, but for existing dependencies can’t you simply pin them to a patch release, and point at the sha?
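A sketch of that pinning approach, reusing the hypothetical `foobarizer` package from elsewhere in the thread: pin the exact version in package.json, and let the lockfile's `integrity` hash (the sha, enforced by `npm ci`) pin the content:

```json
{
  "dependencies": {
    "foobarizer": "1.2.5"
  }
}
```

No `^` or `~` prefix, so the resolver will not silently float to a newer release.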
fragmede 2 days ago [-]
But how do you know which one is good? If foo package sends out an announcement that v1.4.3 was hacked, upgrade now to v1.4.4 and you're on v1.4.3, waiting a week seems like a bad idea. But if the hackers are the one sending the announcement, then you'd really want to wait the week!
throw1230 2 days ago [-]
malicious versions are recalled and removed when caught - so you don't need to update to the next version
dwattttt 2 days ago [-]
An announcement isn't a quiet action. One would hope that the real maintainers would notice & take action.
pxc 2 days ago [-]
Install tools using a package manager that performs builds as an unprivileged user account other than your own, sandboxes builds in a way that restricts network and filesystem access, and doesn't let packages run arbitrary pre/post-install hooks by default.
Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.
If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.
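One way to act on that with npm is to depend on a git commit rather than a registry tarball. The `git+https` dependency syntax is real npm behavior; the repo URL is a placeholder and the commit hash is deliberately left elided:

```json
{
  "dependencies": {
    "foobarizer": "git+https://github.com/example/foobarizer.git#<commit-sha>"
  }
}
```

Pinning to a full commit SHA means the registry can no longer serve you an artifact that differs from what is in source control.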
Learn how your tools actually build. Build your own containers.
Learn how your tools actually run. Write your own CI templates.
My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me from multiple supply chain attacks against tools that I use in the past few weeks.
Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:
- write and distribute CI templates
- run caches, proxies, and artifact repositories, which might create room to
  - pull through packages on a delay
  - run automated scans on updates and flag packages for risks
  - maybe block other package sources to help prevent devs from shooting themselves in the foot with misconfiguration
- set up shared infrastructure for CI runners that
  - use such caches/repos/proxies by default
  - sandbox the network for builds
- help replace, containerize, or sandbox builds that currently only run on bare metal on some aging Jenkins box
- provide docs
  - on build sandboxing tools/standards/guidelines
  - on guidelines surrounding build tools and their behaviours (e.g., npm ci vs npm install, package version locking and pinning standards)
- promote packaging tools for development environments and artifact builds, e.g.,
  - promote deterministic tools like Nix
  - run build servers that push to internal artifact caches to address trust assumptions in community software distributions
- figure out when/whether/how to delegate to vendors who do these things
I think there's a lot of things to do here. The hardest parts are probably organizational and social; coordination is hard and network effects are strong. But I also think that there are some basics that help a lot. And developers who serve other developers, whether they are formally security professionals or not, are generally well-positioned to make it easier to do the right thing than the sloppy thing over time.
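As one concrete instance of the CI-template idea above, a sketch of an install step (GitHub Actions step syntax assumed; `npm ci` and `--ignore-scripts` are real npm flags):

```yaml
steps:
  # Reproducible install: `npm ci` installs exactly what package-lock.json
  # records and fails on drift, unlike `npm install`, which may update it.
  - run: npm ci --ignore-scripts
```

Shipping this as a shared template means individual teams get lockfile-exact, script-free installs without each having to know why.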
4ndrewl 2 days ago [-]
The problem with cooldowns is that the more people use them, the less effective they become.
12_throw_away 2 days ago [-]
The hypothesis you're referring to is something like "if everyone uses a 7-day cooldown, then the malware just doesn't get discovered for 7 days?", right?
An alternative hypothesis: what if 7-day cooldowns incentivize security scanners, researchers, and downstream packagers to race to uncover problems within a 7-day window after each release?
Without some actual evidence, I'm not sure which of these is correct, but I'm pretty sure it's not productive to state either one of these as an accepted fact.
4ndrewl 21 hours ago [-]
Yes, what if it does incentivize security scanners, or maybe it won't.
Either way there will be fewer eyes on it.
eranation 2 days ago [-]
Well, luckily, those who find the malicious activity are usually companies who do this proactively (for the good of the community, and understandably also for marketing). There are several who seem to be trying to be the first to announce, and they usually succeed. IMHO it should be Microsoft (as owners of GitHub and npm) who take the helm and spend the tokens to scan each new package for malicious code. It gets easier and easier to detect as models improve (though on the other hand, it also gets easier and easier to create malware that tries to evade detection).
somehnguy 2 days ago [-]
That was my first instinct as well but I'm not sure how true it really is.
Many companies exist now whose main product is supply chain vetting and scanning (this article is from one such company). They are usually the ones writing up and sharing articles like this - so the community would more than likely hear about it even if nobody was actually using the package yet.
> This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
pdntspa 23 hours ago [-]
This seems pretty sensible. Do we really need updates the day they drop?
doctorpangloss 2 days ago [-]
Haha what if there's an urgent security fix in an updated package?
edf13 2 days ago [-]
Manually review the package and override the setting
doctorpangloss 2 days ago [-]
The flaw of the cooldown solution speaks for itself.
bonzini 2 days ago [-]
Still it's something like a second factor (or even, literally, overriding might require 2FA).
eranation 2 days ago [-]
Yep, that's the main argument against cooldowns, but there are ways to override them. I'll update the docs soon.
ruuda 2 days ago [-]
https://github.com/doy/rbw is a Rust alternative to the Bitwarden CLI. Although the Rust ecosystem is moving in NPM's direction (very large and very deep dependency trees), you still need to trust far fewer authors in your dependency tree than what is common for Javascript.
326 packages right now when doing a build. Seems large in general, but for a Rust project, not abnormal.
Takes what, maybe 15 seconds to compile on a high-core machine from scratch? Isn't the end of the world.
Worse is the scope of having to review all those things; if you'd like to use it for your main passwords, that'd be my biggest worry. Luckily most are well established already, as far as I can tell.
dijit 1 day ago [-]
Why are you talking about compile times in a thread about supply chain security?
326 packages is approximately 326 more packages than I will ever fully audit to a point where my employer would be comfortable with me making that decision (I do it because many eyes make bugs shallow).
It's also approximately 300 more than the community will audit, because it will only be "the big ones" that get audited, like serde and tokio.
I don't see people rushing to audit `zmij` (v1.0.19), despite it having just as much potential to backdoor my systems as tokio does.
elAhmo 2 days ago [-]
"326 seems large, but not abnormal" was the state of JS in the past as well.
Chance of someone auditing all of them is virtually zero, and in practice no one audits anything, so you are still effectively blindly trusting that none of those 326 got compromised.
seanw444 2 days ago [-]
It is baffling to me that a language that is as focused on safety/security as Rust decided to take the JavaScript approach to their ecosystem. I find it rather contradictory.
jeroenhd 1 day ago [-]
I doubt Microsoft's kernel/system Rust code is pulling in a lot of crates. The Linux kernel sure isn't, and Android's Bluetooth stack doesn't seem to either.
Using crates is a choice. You can write fully independent C++ or you can pull in Boost + Qt + whatever libraries you need. Even for C programs, I find my package manager downloading tons of dependencies for some programs, including things like full XML parsers to support a feature I never plan to use.
Javascript was one of the first languages to highlight this problem with things like left-pad, but the xz backdoor showed that it's also perfectly possible to do the same attack on highly-audited programs written in a system language that doesn't even have a package manager.
embedding-shape 2 days ago [-]
That's because you're mixing things up. "Rust the language" isn't the one starting new projects and adding dependencies that have hundreds of dependencies of their own; that is the doing of developers. The developers who built Rust with a focus on safety and security are not the same developers mentioned before.
mort96 2 days ago [-]
Rust and Cargo are, if not inseparable, at least tightly connected. Rust and Rust's stdlib are inseparable.
Cargo is modeled after NPM. It works more or less identically, and makes adding thousands of transitive dependencies effortless, just like NPM.
Rust's stdlib is pretty anemic. It's significantly smaller than node's.
These are decisions made by the bodies governing Rust. It has predictable results.
nathanmills 2 days ago [-]
ohh noo, the devs gave users a choice instead of forcing their hand..
mort96 1 day ago [-]
Design decisions have predictable consequences. Large masses of people, who make up an ecosystem like that of a programming language community, respond predictably to their environment. Each individual programmer has a choice, sure, but you can't just "individual responsibility" your way out of the predictable consequences of incentive structures.
seanw444 2 days ago [-]
That's true. But it does seem like a logical result of having no real standard library. That lone fact has kept me away from Rust for real projects, because I don't want to pull in a bunch of defacto-standard-but-not-official dependencies for simple tasks. That's probably a large contributor to the current state of dependency bloat.
embedding-shape 2 days ago [-]
Yeah, it does require you to be meticulous about what you depend on. Personally I stick with libraries that don't use hundreds of other crates, and I've tried and reviewed various libraries over the years, so I have a "toolkit" of libraries I know are well built and whose internals I understand.
Ultimately in any language you get the sort of experience you build for yourself with the environment you setup, it is possible in most languages to be more conservative and minimal even if the ecosystem at large is not, but it does require more care and time.
wongarsu 2 days ago [-]
'No real standard library' doesn't seem entirely fair. Rust has a huge standard library. What it does have is a policy of only including "mature" things with little expected API evolution in the standard library, which leaves gaping holes where a JSON parser, an HTTP client, or a logging library should be. Those are all those defacto-standard-but-not-officially dependencies.
NetMageSCW 1 days ago [-]
Perhaps that’s just a sign Rust isn’t suitable for those type of projects.
pdimitar 24 hours ago [-]
It's a sign that they learned from Python more than anything else. Better to be conservative than to end up in Python's situation: multiple versions of those common functionalities in the stdlib that almost nobody uses, because everyone goes for third-party libraries anyway. Is that a better state of affairs?
The Rust vs. Node comparison seems very shallow to me, and it seems to require a lot of eye squinting to work.
People have beef with Rust in other, more emotional ways, and welcome the opportunity to pretend they dislike it on seemingly-rational grounds a la "Node bad amirite lol".
pdimitar 24 hours ago [-]
So you only use languages that have a built-in HTTP client, JSON / CSV / XML parsers, and such?
latexr 20 hours ago [-]
I’m not the person you asked, but given the choice I avoid a language without JSON parsing officially supported because I need that frequently. It’s the reason I never picked up Lua, despite being interested in it.
pdimitar 20 hours ago [-]
Interesting, thanks for sharing your anecdote. Upvoted.
I am openly admitting I don't care. Such libraries are in huge demand, and every programming language ecosystem gains them quite early. So to me the risk of malicious code in them is negligibly small.
latexr 18 hours ago [-]
To me it’s not just the risk of malicious code, but also convenience. For example, if I’m using a scripted language and sharing it in some form with users, I don’t want to have to worry about keeping the library updated, and fight with the package manager, and ship extraneous files, and…
pdimitar 18 hours ago [-]
Ah, I don't work with scripting languages though. I understand the difference in usages. Your use-case is quite valid.
DarkUranium 1 days ago [-]
I think there's actually a decent compromise one could make here (not that Rust did, mind), and it's what I'm planning for my own language, in big part to avoid the Rust/NPM/etc situation.
TL;DR, the official libraries are going to be split into three parts:
---
1) `core.*` (or maybe `lang.*` or `$MYLANGUAGE.*` or w/e, you get the point) this is the only part that's "blessed" to be known by the compiler, and in a sense, part of the compiler, not a library. It's stuff like core type definitions, interfaces, that sort of stuff. I may or may not put various intrinsics here too (e.g. bit count or ilog2), but I don't know yet.
Reserved by the compiler; it will not allow you to add custom stuff to it.
There is technically also a "pseudo-package" of `debug.*` ("pseudo" in the sense that you must always use it in the full prefixed form, you can't import it), which is just going to be my version of `__LINE__` and similar. Obviously blessed by compiler by necessity, but think stuff like `debug.file` (`__FILE__`), `debug.line` (`__LINE__`), `debug.compiler.{vendor,version}` (`__GNUC__`, `_MSC_VER`, and friends). `debug` is a keyword, which makes it de-facto non-overridable by users (and also easy for both IDEs and compiler to reason about). Of course I'll provide ways of overriding these, as to not leak file paths to end users in release builds, etc.
(side-note: since I want reproducible builds to be the default, I'm internally debating even having a `debug.build.datetime` or similar ... one idea would be to allow it but require explicitly specifying a datetime [as build option] in such cases, lest it either errors out, or defaults to e.g. 1970-01-01 or 2000-01-01 or whatever for reproducibility)
---
2) `std.*`, which is minimal, 100% portable (to the point where it'd probably even work in embedded [in the microcontroller sense, not "embedded Linux" sense] systems and such --- though those targets are, at least for now, not a primary goal), and basically provides some core tooling.
Unlike #1, this is not special to the compiler ... the `std.*` package is de jure reserved, but that's not actually enforced at a technical level. It's bundled with the language, and included/compiled by default.
As a rule (of thumb, admittedly), code in it needs to be inherently portable, with maybe a few exceptions here or there (e.g. for some very basic I/O, which you kind of need for debugging). Code is also required to have no external (read: native/upstream) dependencies whatsoever (other than maybe libc, libdl, libm, and similar things that are really more part of the OS than any particular library).
All of `std.*` also needs to be trivially sandboxable --- a program using only `core.*` & `std.*` should not be able to, in any way, affect anything outside of whatever the host/parent system told it that it can.
---
3) `etc.*`, which actually work a lot like Rust/Cargo crates or npm packages in the sense that they're not installed by default ... except that they're officially blessed. They likely will be part of a default source distribution, but not linked to by default (in other words: included with your source download, but you can't use them unless you explicitly specify).
This is much wider in scope, and I'm expecting it to have things like sockets, file I/O (hopefully async, though it's still a bit of a nightmare to make it portable), downloads, etc. External dependencies are allowed here --- to that end, a downloads API could link to libcurl, async I/O could link to libuv, etc.
---
Essentially, `core.*` is the "minimal runtime", `std.*` is roughly a C-esque (in terms of feature count, or at least dependencies) stdlib, and `etc.*` are the Python-esque batteries.
Or to put it differently: `core.*` is the minimum to make the language run/compile, `std.*` is the minimum to make it do something useful, and `etc.*` is the stuff to make common things faster to make. (roughly speaking, since you can always technically reimplement `std.*` and such)
I figured keeping them separate allows me to provide for a "batteries included, but you have to put them in yourself" approach, plus clearly signaling which parts are dependency-free & ultra-sandbox-friendly (which is important for embedding in the Lua/JavaScript sense), plus it allows me to version them independently in cases of security issues (which I expect there to be more of, given the nature of sockets, HTTP downloads, maybe XML handling, etc).
atdt 1 days ago [-]
What exactly would you have done differently?
Cargo made its debut in 2014, a year before the infamous left-pad incident, and three years before the first large-scale malicious typosquatting attacks hit PyPI and NPM. The risks were not as well-understood then as they are today. And even today it is very far from being a solved problem.
Ferret7446 1 days ago [-]
Yet Go is half a decade older and seems to have handled the situation much better.
pdimitar 24 hours ago [-]
How does it handle better, exactly?
mayama 10 hours ago [-]
You can write a simple HTTP server or REST client with the stdlib in Go. No need to pull in tokio, serde, and a hundred other crates which constantly break things. I have apps written in Go more than a decade ago that work the same now with a recent version of Go. Whereas I've had issues getting few-year-old GitHub apps written in Rust to compile and work.
cromka 2 days ago [-]
Same here.
Ferret7446 1 days ago [-]
> 326 packages right now when doing a build. Seems large in general, but for a Rust project, not abnormal.
That's a damning indictment of Rust. Something as big as Chrome has, IIRC, a few thousand dependencies. If a simple password manager CLI has hundreds, something has gone wrong; I'd expect only a few dozen.
xvedejas 2 days ago [-]
Does this take into account feature flags when summing LOC? It's common practice in Rust to really only use a subset of a dependency, controlled by compile-time flags.
saghm 2 days ago [-]
My experience has been that while there's significant granularity in terms of features, in practice very few people actively go out of their way to prune the default set because the ergonomics are kind of terrible, and whether or not the default feature set is practically empty or pulls in tons of stuff varies considerably. I felt strongly enough about this that I wrote up my only blog post on this a bit over a year ago, and I think most of it still applies: https://saghm.com/cargo-features-rust-compile-times/
gsnedders 2 days ago [-]
Also just unit tests in the source files, which again aren’t included in the binary via compile-time flags!
traderj0e 2 days ago [-]
For a given tool, I'd expect the Rust version to have even more deps than the JS version because code reuse is more important in a lower-level language. I get the argument that JS users are on average less competent than Rust users, but we're talking about authors who build serious tools/libs in the first place.
saghm 2 days ago [-]
> At least they're pinned though.
Frustratingly, they're not by default though; you need to explicitly use `--locked` (or `--frozen`, which is an alias for `--locked --offline`) to avoid implicit updates. I've seen multiple teams not realize this and get confused about CI failures from it.
The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
LegionMammal978 2 days ago [-]
> The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
Not quite: "1.2.3" = "^1.2.3" = ">=1.2.3, <2.0.0" in Cargo [0], and "1.2" = "^1.2.0" = ">=1.2.0, <2.0.0", so you get the "1.x.x" behavior either way. If you actually want the "1.2.x" behavior (e.g., I've sometimes used that behavior for gmp-mpfr-sys), you should write "~1.2.3" = ">=1.2.3, <1.3.0".
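Spelled out as Cargo requirement strings (crate names are hypothetical):

```toml
[dependencies]
foo = "1.2.3"   # shorthand for "^1.2.3": >=1.2.3, <2.0.0
bar = "1.2"     # "^1.2.0": >=1.2.0, <2.0.0 -- same major-bounded range
baz = "~1.2.3"  # tilde pins the minor: >=1.2.3, <1.3.0
qux = "=1.2.3"  # exact pin
```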
I don't know how I got this wrong because I literally went and looked at that page to try to remind myself, but I somehow misread it, because you're definitely right. This probably isn't the first time I've gotten this wrong either.
From thinking it through more closely, it does actually seem like it might be a little safer to avoid specifying the patch version; it seems like putting 1.2.3 would fail to resolve any valid version in the case that 1.2.2 is the last non-yanked version and 1.2.3 is yanked. I feel like "1.2.3" meaning "~1.2.3" would have been a better default, since it at least provides some useful tradeoff compared to "1.2", but with the way it actually works, it seems like putting a full version with no operator is basically worse than either of the other options, which is disappointing.
tombh 1 days ago [-]
Are we talking about `cargo build` here? Because my understanding is that if a lockfile is present and `Cargo.toml` hasn't changed since the lockfile was created then the build is guaranteed to use the versions in the lockfile.
If however `Cargo.toml` has changed then `cargo build` will have to recalculate the lockfile. Hence why it can be useful to be explicit about `cargo build --locked`.
subarctic 2 days ago [-]
Is there a plan to change this? I don't see why --locked shouldn't be the default
saghm 2 days ago [-]
I haven't heard anything about this, but I really wish it was there by default. I don't think the way it works right now fits anyone's expectations of what the lockfile is supposed to do; the whole point of storing the resolved versions in a file is to, well, lock them, and implicitly updating them every time you build doesn't do that.
wycats 2 days ago [-]
As one of the original authors of Cargo, I agree. lockfiles are for apps and CLIs are apps. QED.
saghm 2 days ago [-]
Since you're here, and you happened to indirectly allude to something that seems to have become increasingly common in the Rust world nowadays, I can't help but be curious about your thoughts on libraries checking their lockfiles into version control. It's not totally clear to me exactly when or why it became widespread, but it used to be relatively rare for me to see in open source libraries in the first few post-1.0 years of Rust, whereas at this point I think it's more common for me to see than not.
Do you think it's an actively bad practice, completely benign, or something in between where it makes sense in some cases but probably should still be avoided in others? Offhand, the only variable I can think of that might influence a different choice is closed-source packages being reused within a company (especially if trying to interface with other package management systems, which I saw firsthand when working at AWS but I'm guessing other large companies would also run into), but I'm curious if there are other nuances I haven't thought of.
masklinn 1 days ago [-]
> It's not totally clear to me exactly when or why it became widespread
It should be fine to do this according to semver as long as the major version is above zero.
saghm 2 days ago [-]
Sure, but according to semver it's also totally fine to change a function that returns a Result to start returning Err in cases that used to be Ok. Semver might be able to protect you from your Rust code not compiling after you update, but it doesn't guarantee it will do the same thing the next time you run it. While changes like that could still happen in a patch release, I'd argue that you're losing nothing by forgoing new API features if all you're doing is recompiling the existing code you have without making any changes, so only getting patches and manually updating for anything else is a better default. (That said, one of the sibling comments pointed out I was actually wrong about the implicit behavior of Cargo dependencies, so what I recommended doesn't protect from anything, but not for the reasons it sounds like you were thinking.)
Some people might argue that changing a function to return an error where it didn't previously would be a breaking change; I'd argue that those people are wrong about what semver means. From what I can tell, people having their own mental model of semver that conflicts with the actual specification is pretty common. Most of the time when I've had coworkers claim that semver says something that actively conflicts with what it says, after I point out the part of the spec that says something else, they end up still advocating for what they originally had said. This is fine, because there's nothing inherently wrong with a version schema other than semver, but I try to push back when the term itself gets used incorrectly because it makes discussions much more difficult than they need to be.
vablings 2 days ago [-]
Wait, you're telling me that node deps are not pinned by default? Every time you install you might be pulling in a new version.
Because they could have a security flaw that might compromise your project or any users of it.
vablings 23 hours ago [-]
For any of my rust projects I really don't bump my deps unless dependabot shows a serious vulnerability or I want to use a new feature added. Outside of that my deps are locked to the last known good version i use.
ramon156 2 days ago [-]
This + vaultwarden is an awesome self-hostable rust version of bitwarden. We might as well close the loop!
yangikan 2 days ago [-]
Is there any downside to using the firefox builtin password manager?
saghm 2 days ago [-]
Does it support autofill for other apps on mobile? I'd argue that putting passwords in your phone clipboard could itself be risky (although for someone who's extremely security conscious, maybe discouraging using apps isn't a downside)
bfivyvysj 2 days ago [-]
The Reddit app is always reading the clipboard.
hsbauauvhabzb 2 days ago [-]
So uninstall Reddit? That app is spyware at best and malware at worst.
saghm 2 days ago [-]
I'm guessing you meant to respond to the sibling comment rather than mine
hsbauauvhabzb 2 days ago [-]
Yes, weirdly enough at the time there was no reply button, I thought HN comments had a maximum nested depth, but now it has a reply button and so does yours. Weird.
saghm 2 days ago [-]
Ah, no worries! Replies seem to get throttled sometimes when the site detects a lot of nested replies quickly and it intentionally delays the ability to reply a bit. I've always assumed that it's intended as a way to try to mitigate threads that potentially are devolving into flamewars.
cromka 2 days ago [-]
It's a bit ironic that everyone considers Rust as safer while completely ignoring the heavily increased risk of pulling in malware in dependencies.
pdimitar 24 hours ago [-]
Different things. "Rust is safer" generally means memory safety i.e. no double-free, no use-after-free, no buffer-/under-flows, and the like. The safety you seem to have in mind is "minimal dependency count".
koyote 2 days ago [-]
I wonder if this is going to push more software to stacks like .NET where you can do most things with zero third-party dependencies.
Or, conversely, encourage programming languages to increase the number of features in their standard libraries.
saghm 2 days ago [-]
A few months ago I tried to build a .NET package on Linux, and the certificate revocation checks for the dependencies didn't complete even after several minutes. Eventually I found out about the option `NUGET_CERTIFICATE_REVOCATION_MODE=offline`, which managed to cause the build to complete in a sane amount of time.
It's hard for me to take seriously any suggestion that .NET is a model for how ecosystems should approach dependency management based on that, but I guess having an abysmal experience when there are dependencies is one way to avoid risks. (I would imagine it's probably not this bad on Windows, or else nobody would use it, but at least personally I have no interest in developing on a stack that I can't expect to work reliably out of the box Linux)
mayama 2 days ago [-]
Go and Python exist with sane stdlibs and are already used extensively.
infogulch 2 days ago [-]
Oh nice it works as an ssh-agent too. Definitely checking this one out.
guywithahat 2 days ago [-]
That’s my concern too. Rust has the same dependency concerns, which is how hackers get into code. Vaultwarden has the same Rust dependency concern. Ironically, we’re entering an age where C/C++ seems to have everything figured out from a dependency management standpoint.
saghm 2 days ago [-]
Now all they need to figure out is how to actually make the C/C++ code that isn't from dependencies secure and they'll be all set
ef2k 2 days ago [-]
The issue was a compromised build pipeline that shipped a poisoned package.
But PSA: If something is critical to the business and you’re using npm, pin your dependencies. I’ve had this debate with other devs throughout the years and they usually point to the lockfile as assurance, but version ranges with a ^ mean that when the lockfile gets updated, you can pull in newer versions you didn’t explicitly choose.
If what you're building can put your company out of business it's worth the hassle.
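A sketch of what pinning looks like in practice (package names and versions are illustrative, not recommendations):

```json
{
  "dependencies": {
    "axios": "1.13.0",
    "express": "4.21.2"
  }
}
```

With `save-exact=true` in `.npmrc`, `npm install --save` writes exact versions instead of `^` ranges, and running `npm ci` in CI installs strictly from the lockfile rather than re-resolving.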
jbverschoor 1 days ago [-]
This is one reason why we have lock files / version pinning
fragmede 2 days ago [-]
But it goes the other way too. If there's a security vulnerability that was fixed in a later version, you want the system to automatically pick that up and apply it for you in an ideal scenario.
TranquilMarmot 1 days ago [-]
Even with ^ you won't get an updated version until somebody runs an install and updates the lockfile.
At this point, the risk of a compromised package outweighs the risk of an upstream vuln that actually matters. npm audit is full of junk like client-side ReDoS vulns; you could probably ignore 90%+ of the reports and still be secure against the majority of attack classes of concern.
bfivyvysj 2 days ago [-]
Why would you patch a security vuln only in a later version? It should be patched in all versions... that's what semver is for.
pavon 1 days ago [-]
A patch update is a newer version, and patch updates are just as likely to be compromised by supply chain attacks as minor or major updates.
dsl 1 days ago [-]
Not exactly.
Security patches aren't like bugs or features where you can just roll a new version. Often patches need to be backported to older versions allowing software and libraries to be "upgraded" in place with no other change introduced.
Say you had software that controlled the careful mix of chemicals introduced into a municipal water supply. You don't just move from version 1.4 to 3.2; you fix 1.4 in place.
thfuran 1 days ago [-]
No, you create version 1.4.19, which fixes a bug in 1.4.18.
raincole 2 days ago [-]
Who is 'you' here? All of the npm package maintainers?
Yes, if they all just backport security patches we'll be fine. No, people are not going to just.
jpleger 2 days ago [-]
Ah yes, the incredibly common practice of... checks notes... backporting security patches in node packages.
kijin 2 days ago [-]
Semver doesn't help if you just declare all older versions EOL.
What you're looking for are Debian stable packages. :p
kronks 2 days ago [-]
[dead]
1024kb 2 days ago [-]
I had a really bad experience with the bitwarden cli. I believe it was `bw list` that I ran, assuming it would list the names of all my passwords, but to my surprise, it listed everything, including passwords and current totp codes. That's not the worst of it though. For some reason, when I ssh'ed into one of my servers and opened tmux, where I keep a weechat irc client running, I noticed that the entire content of the bw command was accessible from within the weechat text input field history. I have no idea how this happened, but it was quite terrifying. The issue persisted across tmux and weechat sessions, and only a reboot of the server would solve the problem.
I promptly removed the bw cli programme after that, and I definitely won't be installing it again.
I use ghostty if it matters.
stvnbn 2 days ago [-]
I love how the first comment is a complaint having nothing to do with the actual subject.
epistasis 2 days ago [-]
Password managers are all about trust, and the main link is about a compromise, so it's not surprising that the first comment is also about trust, even if it's not directly about this particular compromise.
I found the default bwcli clunky and unacceptable, and it's why I don't use it, even though I still have a BitWarden subscription.
harshreality 2 days ago [-]
Where's the evidence that 1024kb's issue had anything to do with bw? How is that vaguely recalled anecdote a trust issue with bw? It was probably caused by accidentally copying something to the clipboard or some other buffer which was then transferred via ssh and imported into weechat, possibly with the help of custom terminal, ssh, tmux, or weechat settings making it too easy for data to be slung around like that.
I can't think of a plausible explanation for how bw is at fault for its terminal output ending up, across a ssh session and tmux invocation, in the chat history of weechat. Even if bw auto-copied its output to the clipboard (which as far as I could tell by glancing at the cli options, it doesn't and can't), and the clipboard is auto-copied to remote hosts, clipboard contents shouldn't appear in an irc client's history without explicit hacking to do that.
The claim is just noise, particularly because it doesn't seem to have ever been investigated.
It seems prudent, if someone wants to use a CLI, to use rbw rather than bw, or even just pass or keepassxc-cli (and self-managed cloud backup or syncing). However, that's based on bw being a JavaScript mess, not based on the unlikely event of bw injecting its output through ssh into IRC clients.
cobolcomesback 2 days ago [-]
Not to mention utter nonsense. There’s no possible way that BW CLI somehow injected command history into a remote server. That was 100% something the GP did, a bug in their terminal, or a config they have with ssh/tmux, not Bitwarden.
reactordev 2 days ago [-]
that's our future... with AI. Engineers that don't know the difference between client-side convenience and server-side injection, how to configure `php.ini`, or that no synchronized password manager is safe. While the OAuth scope is `*`, and CORS is what you drink on the weekend.
Sohcahtoa82 2 days ago [-]
Can someone explain why people struggle with CORS?
The full strength of the SOP applies by default. CORS is an insecurity feature that relaxes the SOP. Unless you need to relax the SOP, you shouldn't be enabling CORS, meaning you shouldn't be sending an Access-Control-Allow-Origin header at all.
If your front-end at www.example.com makes calls to api.example.com, then it's simple enough to just add www.example.com to CORS.
12_throw_away 2 days ago [-]
IME, CORS is pretty straightforward in prod but can be a huge pain in dev environments, so you end up with lots of little hacks to get your dev environments working (and then one of those hacks leaks back into prod and now you have CORS problems in prod).
reactordev 2 days ago [-]
This. This is a result of not having proper environments and engineering practices in place and so the team or engineer is free to just wing it and add hacks around security best practices because the Security Team (tm) is elsewhere and they never understand the ask. They know PKI and certificates, access card identity, maybe Cisco for their "cyber security" but that's usually where it ends. Yet somehow, they are in charge of CORS and TLS and Sast/Dast scans and everything else that should be baked into the pipelines and process. Resulting in an engineer saying f'it and adding an `if localhost` hack or something. CORS is one example but there are many others in pretty much every area of security. OAuth, CORS, LDAP, Secrets, Hashing, TOTP, you name it. Each has a plethora of packages and libraries that can "do" the thing but it always becomes a hairball mess to the dev because they never understood it to begin with.
fragmede 2 days ago [-]
That simple prod example isn't where people struggle with CORS. It's during development, when I've got assets on Cloudflare and AWS and GCP and localhost:3000 and localhost:8000 and localhost:3001, and then a VM at Hetzner at api.example.com because why not; that shit gets complicated and people get confused and lost. I mean, yeah, don't do that, but CORS gets complicated once the project gets enough teams involved.
tcoff91 1 days ago [-]
I’ve found that the best way to deal with this is to add an entry to /etc/hosts for my local machine that fits the pattern for QA environment. Then I run a local reverse proxy with a self signed certificate.
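A sketch of that setup, with the hostname, port, and proxy choice (here Caddy's `tls internal` self-signed local CA) all being assumptions:

```
# /etc/hosts -- point a hostname matching the QA pattern at the local machine
127.0.0.1   app.qa.example.com

# Caddyfile -- terminate TLS with a locally generated self-signed cert
# and proxy to the dev server
app.qa.example.com {
    tls internal
    reverse_proxy localhost:3000
}
```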
Care to elaborate? I'd agree that the security/availability tradeoff is different, but "not safe" is as nonsensical a blanket statement as "all/only offline/paper-based/... password managers are safe".
renewiltord 2 days ago [-]
Probably the terminal emulator is something like iTerm2, where double-click to select and copy to clipboard is a feature.
nicce 2 days ago [-]
I thought the CLI would be efficient when I looked into using it, and then I figured out it is JavaScript.
rvz 2 days ago [-]
Exactly. That is the problem.
There is a time and place for where it makes sense and a password manager CLI written in TypeScript importing hundreds of third-party packages is a direct red flag. It is a frequent occurrence.
We have seen it happen with Axios, which was hit by one of the biggest supply chain attacks on the JavaScript/TypeScript ecosystem, and it makes no sense to build sensitive tools with that.
lxgr 2 days ago [-]
> importing hundreds of third-party packages
But how else are you going to check if a number is even or odd? Remember, the ONLY design goal is not repeating yourself (or in fact anything anyone has ever thought of implementing).
dannyw 2 days ago [-]
That’s a serious red flag. I’m concerned and I don’t think it shows a security first culture.
trinsic2 2 days ago [-]
Wow. That's crazy. Is there an extension for bwcli in weechat? BTW I didn't even know BW had a CLI until now. I use KeePass locally.
harshreality 2 days ago [-]
It's crazy because it's not default bw behavior, or even any bw behavior... I don't use the cli, but I don't see any built-in capacity to copy bw output to the clipboard. (In the UNIX way, you'd normally pipe it to a clipboard utility if you wanted it copied, and then the security consequences are on you.)
They probably caused it themselves, somehow, and then blamed bitwarden. Note in the original comment they aren't even entirely sure what the command was, and they weren't familiar with it or they wouldn't have been surprised by its output... so how can they be sure what else they did between that command and the weechat thing?
If the terminal or tmux fed terminal history into weechat, that's also not bw's problem.
I know this because I had the same surprised reaction
martin- 20 hours ago [-]
No one is disputing that part. It's the "copied into clipboard automatically" part that sounds implausible.
1024kb 2 days ago [-]
I don't know, I use a vanilla weechat setup
flossly 2 days ago [-]
Never used the CLI, but I do use their browser plugin. Would be quite a mess if that got compromised. What can I do to prevent it? Run old --tried and tested-- versions?
Quite bizarre to think how much of my well-being depends on those secrets staying secret.
zerkten 2 days ago [-]
Integration points increase the risk of compromise. For that reason, I never use the desktop browser extensions for my password manager. When password managers were starting to become popular there was one that had security issues with the browser integration so I decided to just avoid those entirely. On iOS, I'm more comfortable with the integration so I use it, but I'm wary of it.
brightball 2 days ago [-]
The problem is that the UX with a browser extension is so much better.
tracker1 2 days ago [-]
I also find it far easier to resist accidentally entering credentials in a phishing site... I'm pretty good about checking, but it's something I tend to point out to family and friends to triple check if it doesn't auto suggest the right site.
brightball 2 days ago [-]
Exactly. Same principle of passkeys, Yubikeys and FIDO2. Much harder to phish because the domains have to match.
Barbing 2 days ago [-]
I’m impressed with their feature to add the URL for next time, after manually filling on an unmatched URI. Hairs raised on neck clicking confirm though.
ufmace 2 days ago [-]
Important IMO is the extra phishing protection: the UX is really nice if and only if the URL matches what's expected. If you end up on a fake URL somehow, it's a nice speed bump that it doesn't let you auto-fill, making you think: hold on, something is wrong here.
If you're used to the clunkier workflow of copy-pasting from a separate app, then it's much easier to absent-mindedly repeat it for a not-quite-right url.
QuantumNomad_ 2 days ago [-]
The 1Password mobile and desktop apps have such a nice UX that I’m happy copy pasting from and into it instead of having any of the browser extensions enabled.
I have 1Password configured to require password to unlock once per 24 hours. Rest of the time I have it running in the background or unlock it with TouchID (on the MacBook Pro) or FaceID (on the iPhone).
It also helps that I don’t really sign into a ton of services all the time. Mostly I log into HN, and GitHub, and a couple of others. A lot of my usage of 1Password is also centered around other kinds of passwords, like passwords that I use to protect some SSH keys, and passwords for the disk encryption of external hard drives, etc.
embedding-shape 2 days ago [-]
> The 1Password mobile and desktop apps have such a nice UX that I’m happy copy pasting from and into it instead of having any of the browser extensions enabled.
Also a great way of missing out on one of the best protections of password managers: completely eliminating phishing without even requiring thinking. And yes, it still requires you to avoid manually copy-pasting without thinking when it doesn't work, but it's so much better than your current approach, which basically offers 0 protection against phishing.
yborg 2 days ago [-]
My approach is that for critical sites like banking, I use the site URL stored in the password manager too, I don't navigate via any link clicking. I personally am fine with thinking when my entire net worth is potentially at stake.
embedding-shape 2 days ago [-]
It's not only about how you get there, but that the autofill shows/doesn't show, which is the true indicator (beyond the URL) if you're in the right place or not.
Rogue browser extensions for example could redirect you away from the bank website (if the bank website has poor security) when you go there, so even if you use the URL from the password manager, if you don't use the autofill feature, you can still get phished. And if the autofill doesn't show and you mindlessly copy-paste, you'd still get phished. It's really the autofill that protects you here, not the URL in the password manager.
QuantumNomad_ 2 days ago [-]
If you have rogue browser extensions installed, the browser extension can surely read the values that got filled into the login page without having to redirect to another site.
embedding-shape 2 days ago [-]
Not necessarily, a user could have accepted a permission request for some (legit) redirect extension that never asked for content permission, then when the rogue actor takes over, they want to compromise users and not change the already accepted permissions.
Concretely, I think a redirect browser extension would use the "webRequest" permission, while for in-page access you'd need a content script for specific pages, so in practice they differ in what the extension gets access to.
akimbostrawman 1 days ago [-]
You don't need autofill as an indicator. Simply bookmark your bank's login page; even if it gets silently redirected later, you will notice, as the page won't be bookmarked anymore.
QuantumNomad_ 2 days ago [-]
In Safari on iOS I have all the main pages I use as favourites, so that they show on the home screen of Safari.
Likewise I have links in the bookmarks bar on desktop.
I use these links to navigate to the main sites I use. And log in from there.
I don’t really need to think that way either.
But I agree that eliminating the possibility all-together is a nice benefit of using the browser integration, that I am missing out on by not using it.
embedding-shape 2 days ago [-]
Which works great until tags.tiqcdn.com, insuit.net or widget-mediator.zopim.com (example 3rd-party domains loaded when you enter the landing page of some local banks) get compromised. I guess it's less likely to happen with the bigger banks; my main bank doesn't seem to load any scripts from 3rd parties, as a counter-example. Still, rogue browser extensions scare me, although I only have like three installed.
tredre3 2 days ago [-]
> The problem is that the UX with a browser extension is so much better.
It's better, but calling it so much better [that it's unreasonable to forgo the browser extension] is a bit silly to me.
1. Go to website login page
2. Trigger the global shortcut that invokes your password manager.
3. Your password manager will appear with the correct entry usually preselected; if not, type 3 letters of the site's name.
4. Press enter to perform the auto-type sequence.
There, an entire class of exploits entirely avoided. No more injecting third-party JS into all pages. No more keeping a listening socket in your password manager, ready to give away all your secrets.
The tradeoff? You now have to manually press ctrl+shift+space or whatever when you need to log in.
Ritewut 2 days ago [-]
The tradeoff is that you need to know how to set up a global shortcut, or even know it's possible. I wish people would stop minimizing the knowledge they have as something everyone just knows.
dwedge 2 days ago [-]
How do you set up this shortcut? I'd prefer to get rid of extensions, if for no better reason than sometimes it switches to my work profile and I have to re-login
lern_too_spel 2 days ago [-]
Also, you want to avoid exposing your passwords through the clipboard as much as possible.
archargelod 2 days ago [-]
On unix-like OSes you can use `xsel` and configure it to clear clipboard after a single paste and/or after a set period of time.
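The timed-clear part can be a tiny wrapper; a sketch (assumes xsel is installed and an X session is running; the 15-second window is arbitrary):

```shell
# Copy a secret to the X clipboard, then clear it after 15 seconds
# so it doesn't linger for other processes to read.
printf '%s' "$SECRET" | xsel --input --clipboard
( sleep 15 && xsel --clear --clipboard ) &
```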
flossly 2 days ago [-]
On iOS I feel I have less control over what's running than on Linux (don't get me started on Windows or Android), so that's the order in which I dare to use it. But against a supply chain attack: I'll always be using a distributed program; the only thing I can do is use only old versions and trusted distribution channels.
WhyNotHugo 2 days ago [-]
In theory the browser integration shouldn’t leak anything beyond the credentials being used, even if compromised.
When you use autofill, the native application will prompt to disclose credentials to the extension. At that point, only those credentials go over the wire. Others remain inaccessible to the extension.
uyzstvqs 2 days ago [-]
We need cooldowns everywhere, by default. Development package managers, OS package managers, browser extensions. Even auto-updates in standalone apps should implement it. Give companies like Socket time to detect malicious updates. They're good at it, but it's pointless if everyone keeps downloading packages just minutes after they're published.
eranation 2 days ago [-]
Exactly this. For anyone who wants to do it for various package managers:
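For completeness, the settings (recent versions required; all values equal a roughly 7-day cooldown):

```
# ~/.npmrc (npm 11.10+)
min-release-age=7 # days

# ~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes

# ~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds

# ~/.config/uv/uv.toml
exclude-newer = "7 days"
```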
This would have protected the 334 people who downloaded @bitwarden/cli 2026.4.0 ~19h ago (according to https://www.npmjs.com/package/@bitwarden/cli?activeTab=versi...). Same for axios last month (removed in ~3h). Doesn't help with event-stream-style long-dormant attacks but those are rarer.
(plug: released a small CLI to auto-configure these — https://depsguard.com — I tried to find something that would help non-developers quickly apply recommended settings, and couldn't find one)
tomjen3 2 days ago [-]
I am not sure that works - imagine that the next shellshock had been found. Would you want to wait 7 days to update?
We need to either screen everybody or cut off countries like North Korea and Iran from the Internet.
tadfisher 2 days ago [-]
These vulnerabilities are all caught by scanners and the packages are taken down 2-3 hours after going live. Nothing needs to take 7 days, that's just a recommendation. But maybe all packages should be scanned, which apparently only takes a couple of hours, before going live to users?
AgentME 2 days ago [-]
Shellshock was in 2014 and Log4Shell was 2021. It's far more likely that you're going to get pwned by using a too-recent unreviewed malicious package than to be unknowingly missing a security update that keeps you vulnerable to easy RCEs. And if such a big RCE vuln happens again, you're likely to hear about it and you can whitelist the update.
sph 2 days ago [-]
> What can I do to prevent it?
My two most precious digital possessions - my email and my Bitwarden account - are protected by a Yubikey that's always on my person (and another in another geographical location). I highly recommend such a setup, and it's not that much effort (I just keep my Yubikey with my house keys)
I got a bit scared reading the title, but I'm doing all I can to be reasonably secure without devolving into paranoia.
ThePowerOfFuet 2 days ago [-]
If the software gets poisoned then your YubiKey will not save you.
hgoel 2 days ago [-]
I think they mean to secure your most valuable accounts with a hardware token rather than in a normal password manager, so they aren't at risk if your password manager has an issue.
streb-lo 2 days ago [-]
Use the desktop or web vault directly, don't use the browser plugin.
flossly 2 days ago [-]
How are they clearly less susceptible to a supply chain attack?
Maybe the web vault, but then we don't know when it's compromised (that's the whole idea); so we trust them not to have made a mess...
(disclaimer: I maintain the 2nd one; if I had known of the first, I wouldn't have released it, I just didn't find anything at that time. They do pretty much the same thing; mine is a bit of an overkill by using rust...)
aftbit 2 days ago [-]
Do either of those work on browser extensions that I install as a user? I don't see anything relating to extensions in there.
eranation 23 hours ago [-]
Nope but that’s a good idea
ffsm8 2 days ago [-]
You should use hunter2 as your password on all services.
That password cannot be cracked because it will always display as ** for anyone else.
My password is *****. See? It shows as asterisks so it's totally safe to share. Try it!
... Scnr •́ ‿ , •̀
wing-_-nuts 2 days ago [-]
ah, the old bash.org.
darkwater 2 days ago [-]
> Russian locale kill switch: Exits silently if system locale begins with "ru", checking Intl.DateTimeFormat().resolvedOptions().locale and environment variables LC_ALL, LC_MESSAGES, LANGUAGE, and LANG
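For illustration, the described check translated to shell (the real payload is JavaScript; this only mirrors the reported env-var fallback order):

```shell
# Illustrative only: mimic the reported "ru" locale kill switch using
# the same environment variables (LC_ALL, LC_MESSAGES, LANGUAGE, LANG).
locale="${LC_ALL:-${LC_MESSAGES:-${LANGUAGE:-${LANG:-}}}}"
case "$locale" in
  ru*) echo "would exit silently" ;;
  *)   echo "would run payload" ;;
esac
```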
So bold and so cowardly at the same time...
NewsaHackO 2 days ago [-]
The worst thing is that you can't even tell if that's "real" or just a false flag.
embedding-shape 2 days ago [-]
Does it matter? Lots of groups do such checks at startup at this point, because every news outlet that reports on it suddenly believes the group to be Russian if you do, so it's a no-brainer to add it today to misdirect even a little.
NewsaHackO 2 days ago [-]
My point is that it could still be Russia, as they know that we know it is used as a false flag.
embedding-shape 2 days ago [-]
My point is; what changes if we knew for a fact it was Russia or that it was someone else?
NewsaHackO 2 days ago [-]
>My point is; what changes if we knew for a fact it was Russia or that it was someone else?
Is this a serious question?
yonatan8070 2 days ago [-]
Sounds serious to me
It's highly unlikely that the people behind an attack like this would come out (non-anonymously) and take credit. And it's unlikely they'll be caught. So does it matter to most people if it's Russians, Americans, Iranians, North Koreans, or some other country?
If you're a 3-letter agency, you'd want to know and potentially arrest them, but as a random guy on the internet, or even a maintainer, I really don't think it matters.
NewsaHackO 2 days ago [-]
So if it came out that the NSA was attempting to put backdoors in consumer password managers, it wouldn't change the context of the side channel attack? How about if it was a company (like Google)? It seemed like an unserious question because I can't understand how someone would think something like that wouldn't change the situation.
aucisson_masque 2 days ago [-]
Does the NSA really need that? 99% of our services are hosted on American servers, to which the NSA already has full access.
Why would you steal the key when you're already in the house ?
And for the high profile, like some Iranian scientist who has the code to something important, they wouldn't use things like bitwarden.
I really see no use case when the nsa would need access to your bitwarden vault.
embedding-shape 2 days ago [-]
> So if it came out that the NSA was attempting to put backdoors in consumer password managers, it wouldn't change the context of the side channel attack?
Not really; we already know that the NSA attempts shit like this all the time. If that came out, it'd mean the same as the Snowden leaks: a bunch of nerds going "Huh, who could have predicted this?". I don't see the point in it being Russia, China or the US; I'd like it just as much if the US did it as if Russia did, so that's why I asked why it matters.
john_strinlai 2 days ago [-]
for most people, nothing.
for threat intel people, a lot.
oneshtein 1 days ago [-]
If it walks like a duck and quacks like a duck, then it is a Russian spy masquerading as a duck. Russia is in a cold war with NATO.
bell-cot 2 days ago [-]
"Discretion is the better part of valor", "Never point it at your own feet", "Russian roulette is best enjoyed as a spectator", and many other sayings seem applicable.
testfrequency 2 days ago [-]
Smells like blackmail from another nation..
iririririr 2 days ago [-]
ah yes, because everyone sets locale on their npm publish github CI job.
obvious misdirection, but it does serve to make it very obvious it was a state actor.
embedding-shape 2 days ago [-]
> but it does serve to make it very obvious it was a state actor
Lol no, lots of groups do this, non-state ones too.
hypeatei 2 days ago [-]
That isn't a smoking gun. I think it was the Vault7 leaks which showed that the NSA and CIA deliberately leave trails like this to obfuscate which nation state did it. I'm sure other state actors do this as well, and it's not a particularly "crazy" technique.
oneshtein 1 days ago [-]
So, Russia is no longer a target for CIA?
hypeatei 24 hours ago [-]
What? All I'm saying is that attribution isn't easy to do in these cases.
mobeigi 2 days ago [-]
KeePass users continue to live the stress-free life.
I've managed to avoid several security breaches in the last 5 years alone by using KeePass locally on my own infra.
gbalduzzi 2 days ago [-]
I don't understand how this solves the issue in this case.
Bitwarden vaults were not compromised, there was a problem in a tool you used to access the secrets.
What makes it impossible for KeePass access tools to have these issues?
john_strinlai 2 days ago [-]
>What makes it impossible for KeePass access tools to have these issues?
the superiority of keepass users scares away the bad actors
prmoustache 2 days ago [-]
> I don't understand how this solves the issue in this case.
I'd say since it is a local-only tool, you don't really need to update it constantly, provided you are a sane person who doesn't use a browser extension. It makes it easier to audit and leaves you less at risk of having your tool compromised.
It doesn't have to be KeePass though; it can be any local password management tool like pass[1] or its GUIs, or simply a local encrypted file.
KeePassXC can also be configured to allow/deny when a browser extension requests a password.
nathanmills 2 days ago [-]
Why are browser extensions not sane in your opinion?
akimbostrawman 1 days ago [-]
Browser password manager extensions are like putting a dog door on your reinforced vault door. Giant increase in attack surface.
neobrain 1 days ago [-]
Quite the contrary, actually: not using a browser extension makes you much more susceptible to phishing attacks, since your password manager won't be able to protect you from copy-pasting credentials into an imposter website.
Capricorn2481 22 hours ago [-]
Well we're in a thread about the CLI being compromised. I've never heard of a sandboxed browser extension being compromised.
d3Xt3r 2 days ago [-]
It's not impossible, but most KeePass tools are written in sane languages and built with sane tooling, and don't use trash like Javascript and npm. Of course I'm not considering browser extensions or exclusive web-clients, but the main KeePass client has a good autotype system, so you don't really need to use the browser extension.
In any case, the fact that the official Bitwarden client (which uses Electron, btw) and even the CLI are written in JavaScript/TypeScript should tell you everything you need to know about their coding expertise and security posture.
lousken 2 days ago [-]
Fully agree, I can't wait for the day when developers finally stop using javascript for shit it was never designed for. .NET is decades ahead at this point.
1024kb 2 days ago [-]
I need my passwords to be accessible from my infrastructure and my phone. How do you achieve this with KeePass? I assumed it was not possible, but in fairness, I haven't really gone down that rabbit hole to investigate.
worble 2 days ago [-]
Keepass is just a single file, you can share it between devices however you want (google drive, onedrive, dropbox, nextcloud, syncthing, rsync, ftp, etc); as long as you can read and write to it, it just works. There are keepass clients for just about everything (keepassxc for desktops, keepass2android or keepassdx for android, keepassium for iphone).
aborsy 2 days ago [-]
How is the quality of browser extensions compared to Bitwarden?
worble 2 days ago [-]
I don't have any points of comparison since I've never used Bitwarden, but it works well enough for my purposes. It'll match the url, offer to autofill (sometimes those multiflow sites like Microsoft will trip it up, but you can always just right click -> enter username/password for a site and that'll work), and it does TOTP filling too.
prmoustache 2 days ago [-]
You don't use a browser extension if you are serious about security anyway.
TheDong 1 days ago [-]
You do use the browser extension because it's a strong anti-phishing defense.
If someone links me to "rnicrosoft.com" with a perfectly cloned login page, my eyes might not notice that it's a phishing link, but my browser extension will refuse to autofill, and that will cause me to notice.
Phishing is one of the most common attacks, and also one of the easiest to fall for, so I think using the browser extension is on-net more secure even though it does increase your attack surface some.
I know proper 2fa, like webauthn/fido/yubikeys, also solves this (though totp 2fa does not), but a lot of the sites I use do not support a security key. If all my sites supported webauthn, I think avoiding the browser extension would be defensible.
prmoustache 1 days ago [-]
Not having an account for every single damn website + only logging in on websites you actually navigated to yourself, without following a link, goes a long way to avoid that.
Sure, typosquatting exists here and there, but it tends to be much easier to spot than phishing URLs using unicode variants.
nextlevelwizard 1 days ago [-]
I guess I better just use same password everywhere then…
gck1 2 days ago [-]
How do you autofill from your db then?
prmoustache 2 days ago [-]
I don't autofill. It may be less user friendly but it is not that big of a deal.
nathanmills 2 days ago [-]
I don't save browser cookies for obvious privacy reasons and it's absolutely a big deal to not need to pull up some program and copy paste my login details constantly for every site.
prmoustache 1 days ago [-]
I try to limit my account creation to the minimum. HN is one of the few, for the better or for the worse as sometimes I just think I should nuke it and stop wasting time commenting.
eipi10_hn 1 days ago [-]
I usually just use another profile for the stuff where I clear cookies when closing the profile. The other profiles I use only for a limited number of sites that need logging in; each site is in its own container and I don't browse other sites on those profiles.
If I ever need to fill the login, I just do any of these:
- KeepassXC has auto-type feature, so I just choose the needed one and let it auto-type
- I enable the extension only when I need to log in and choose the one I need to fill (not auto-fill, but only fill when I click on the account from the extension pop-up dashboard).
FreePalestine1 1 days ago [-]
That is the problem: syncing isn't the most trivial problem, especially for non-technical folks. User experience is far superior in a fully managed solution.
yolo_420 2 days ago [-]
Not op, but you can use a public cloud with Cryptomator on top if you don't trust your password DB on a non-E2E cloud. Or you can just use your own cloud (but then no access from outside, or you take the risk and open up your infra), and then any of the well-known clients on your phone. Can optionally sandbox them if possible, and then just be mindful of sync conflicts with the DB file, but I assume you, like most people, will 99.9% of the time be reading the DB, not writing to it.
kay_o 2 days ago [-]
Avoid OneDrive btw - it thinks encrypted files are ransomware; previous use of Cryptomator resulted in nonstop ransomware warnings.
piperswe 2 days ago [-]
Syncthing can synchronize Keepass files between devices quite well.
jasonjayr 2 days ago [-]
I rely on this too, but I'm counting down the days until Android no longer lets Syncthing touch another app's files :(
antiframe 2 days ago [-]
I never enjoyed the Android syncthing experience, so I just plug my phone in once a month and manually copy the vault over. I don't ever edit on my phone, so I don't need two-way syncing.
piperswe 2 days ago [-]
It would be strange if Android locked that down further than even iOS - Keepassium on iOS can open files from any sync app IIRC
alcazar 2 days ago [-]
What happens if you add a new item on two devices simultaneously?
63stack 2 days ago [-]
It renames one of them to $hostname_conflicted, or something like that.
Keepass has a built in tool for reconciling two databases, you can use that in this scenario.
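With KeePassXC this reconciliation is scriptable too; a hedged sketch (assumes both files are copies of the same database and share credentials; filenames are placeholders):

```shell
# Merge a Syncthing conflict copy back into the main database.
# --same-credentials reuses the first database's key for the second.
keepassxc-cli merge --same-credentials \
  passwords.kdbx "passwords.sync-conflict-20240101.kdbx"
```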
prmoustache 2 days ago [-]
Why would you do that?
By the way, syncthing can manage conflicts by keeping one copy of the file with a specific name and date. You can also decide if one host is the source of truth.
mrWiz 2 days ago [-]
I use macOS and iOS for home devices and Windows for work, and use Strongbox on the Apple side with KeePassXC on the Windows side, syncing them using Dropbox.
thepill 2 days ago [-]
For me it is nextcloud + wireguard
SV_BubbleTime 2 days ago [-]
Someone is about to hop on and tell you how they simply use Dropbox/GDrive to host their keepass vault and how that's "good enough for me" (which should be Keepass's tagline), and on mobile they use a copy or some other manually derived and dependency-ridden setup. They will defend ad hoc over designed because their choice of ad hoc cloud is better than a service you use.
Ukv 1 days ago [-]
> and how that's good enough for me
I'd go further than that and say for me personally, the fact it's just a file is a selling point, not a "good enough" concession. I can just put passwords.kdbx alongside my notes.txt and other files (originally on a thumbdrive, now on my FTP server) - no additional setup required.
There will be people who use multiple devices but don't already have a good way to access files across them, but even then I'm not fully convinced that SaaS specifically for syncing [notes/passwords/photos/...] really is the most convenient option for them, as opposed to just being a well-marketed local maximum. Easy to add one more subscription, easy to suck it up when a terms change forbids you from syncing your laptop, easy to pray you're not affected by recurring breaches, ... but I'd suspect it often (not always) adds up to more hassle overall.
xienze 2 days ago [-]
I use self-hosted Bitwarden (Vaultwarden) for this. It runs on my local network, and I have it installed on my phone etc. When I’m on my local network, everything works fine. When I’m not on my local network, the phone still has the credentials from the last time it was synced (i.e., last time it was used while the phone was on the home network). It’s a pretty painless way to keep things in sync without ever allowing Bitwarden to be accessible outside my home network.
Matl 2 days ago [-]
I mean there are ways, e.g. if you run something like tailscale and can always access your private network, but it is a hassle.
Plus, now you're responsible for everything. Backups, auditing etc.
walrus01 2 days ago [-]
In short, when I make a major password or credential change I do it from my laptop, consider that file on disk to be the "master" copy, and then manually sync the file on a periodic basis to my phone. I treat the file on the phone as read-only. Works fine so far.
To date there have been zero instances when I needed to significantly change a password/service/login/credential solely from my phone and I was unable to access my laptop.
Additionally the file gets synchronized to a workstation that sits in my home office accessible by personal VPN, where it can be accessed in a shell session with the keepass CLI: https://tracker.debian.org/pkg/kpcli
You can use an extremely wide variety of your own choice of secure methods for how to get the file from the primary workstation (desktop/laptop) to your phone.
afavour 2 days ago [-]
Which is great for Hacker News users that can maintain their own infra. But if we're talking "stress free", that's not an answer for the average user...
kelvinjps10 2 days ago [-]
what "infra"? keepass works locally, and just opens a database file. it works the same as any other password manager.
afavour 2 days ago [-]
Most other password managers have a cloud component so if your local storage breaks or gets lost you don't lose all your passwords.
NoMoreNicksLeft 2 days ago [-]
The average user is reusing their password everywhere, and rotation means changing the numeral 6 at the end of the password to 7.
NegativeK 2 days ago [-]
We should be encouraging those users to switch to a password manager.
NoMoreNicksLeft 2 days ago [-]
I do when I can, but there's a learning curve, and the rest of the world is trying to move those users in a very different direction (passkeys and other bullshit).
Password habits for many people are now decades-old, and very difficult to break.
Perz1val 2 days ago [-]
Ok, single file, blah, blah. Realistically how do you sync that and how do you resolve conflicts? What happens if two devices add a password while offline, then go online?
eipi10_hn 2 days ago [-]
I actually was a Bitwarden user at first, but in reality the frequency at which I change emails/passwords is not that high. It's not like I change those things every hour or every day, as with my work files/documents that need constant syncing to the drive. And the chance that I add/change passwords on 2 devices at around the same time is even lower.
So gradually I don't feel I need syncing that much any more and switched to Keepass. I made my mind that I'll only change the database from my computer and rclone push that to any cloud I like (I'm using Koofr for that since it's friendly to rclone) then in any other devices I'll just rclone pull them after that when needed. If I change something in other devices (like phones), I'll just note locally there and change the database later.
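The workflow is two commands (a sketch; "koofr:" is whatever remote name you configured via `rclone config`, and the paths are placeholders):

```shell
# Push the database from the machine where edits happen...
rclone copy ~/passwords/vault.kdbx koofr:vault/
# ...and pull it on any other device before opening it.
rclone copy koofr:vault/vault.kdbx ~/passwords/
```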
But ofc if someone needs to change their data/password frequently then Bitwarden is clearly the better choice.
kelvinjps10 2 days ago [-]
the only thing I can't figure out with keepass is how to back it up in the cloud. Like, if you encrypt your backup, then where do you save that password? And then where do you save the password for the cloud provider?
hootz 2 days ago [-]
You save the single password in your head. All other passwords go inside Keepass.
eipi10_hn 2 days ago [-]
Same as Bitwarden? You just need to remember your KeePass password, just like remembering your Bitwarden password.
pregnenolone 2 days ago [-]
> KeePass users continue to live the stress free live.
This article is borderline malicious in how it skirts the facts.
This wasn't a case where KeePass was compromised in any way, as far as I can tell. This appears to be a basic case of a threat actor distributing a trojanized version via malicious ads. If users made sure they are getting the correct version, they were never in danger. That's not to say that a supply chain attack couldn't affect KeePass, but this article doesn't say that it has.
dspillett 2 days ago [-]
That looks like you'd have to download and run a hacked installer that was never available from an official location. That is a much lower risk than a supply-chain attack, where anyone building bitwarden-cli from the official repo would be infected via the compromised dependency.
Long term keepass users aren't going to be affected. If you mention software to others make sure you send them a link to a known safe download location instead of having them search for one (as new users searching like that are more at risk of stumbling on a malicious copy of the official site hosting a hacked version).
derkades 2 days ago [-]
This AI generated article is not about vulnerabilities in KeePass, rather about malicious KeePass clones.
baby_souffle 2 days ago [-]
Happy 1password user for more than a decade.
It's only a matter of time until _they_ are also popped :(.
jaxefayo 2 days ago [-]
I think most people use keepassxc, not original keepass.
hypeatei 2 days ago [-]
That's an AI slop article. I'm not sure how someone creating their own installer and buying a few domains to distribute it is a mark against KeePass itself.
> The beacon established command and control over HTTPS
hrimfaxi 2 days ago [-]
> The affected package version appears to be @bitwarden/cli 2026.4.0, and the malicious code was published in bw1.js, a file included in the package contents. The attack appears to have leveraged a compromised GitHub Action in Bitwarden’s CI/CD pipeline, consistent with the pattern seen across other affected repositories in this campaign.
erans 2 days ago [-]
The part that seems most important here is that npm install was enough.
Once the compromise point is preinstall, the usual "inspect after install" mindset breaks down. By then the payload has already had a chance to run.
That gets more interesting with agents / CI / ephemeral sandboxes, because short exposure windows are still enough when installs happen automatically and repeatedly.
Another thing I think is worth paying attention to: this payload did not just target secrets, it also targeted AI tooling config, and there is a real possibility that shell-profile tampering becomes a way to poison what the next coding assistant reads into context.
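One concrete mitigation for the preinstall vector is turning off lifecycle scripts by default (these are real npm flags; whether it's workable depends on which of your dependencies genuinely need install scripts — pnpm ≥10 already blocks dependency scripts unless allowlisted):

```shell
# Refuse to run preinstall/postinstall scripts for all installs.
npm config set ignore-scripts true
# Or per-invocation:
npm install --ignore-scripts
```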
I work on AgentSH (https://www.agentsh.org), and we wrote up a longer take on that angle here:
Nobody inspects packages after install; your theory has been debunked multiple times. Caring about npm install running scripts is moot when you'll inevitably run the actual binary after install.
And besides, you could always pull the package and inspect it before running install. Unless you really know the installer and deeply understand its guarantees (e.g., whether it's possible for an install to deploy files outside of node_modules), it's insane to even vaguely trust it to pull and unpack potentially malicious code.
lxgr 2 days ago [-]
What's particularly impressive about this attack is that the attackers must have precisely coordinated it with Github not being down.
I made a scanner (ActionPin) for the workflow patterns this compromise exposed.
ActionPin is a GitHub Actions hardening checker that flags unpinned third-party actions, overbroad workflow permissions, install scripts that touch secrets, and agent-triggered jobs that can reach production credentials.
ActionPin is hosted on GitHub.
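A crude shell approximation of the unpinned-actions check (a heuristic sketch, not ActionPin itself; a real checker parses the YAML properly):

```shell
# List `uses:` refs from workflow files, then filter out the ones pinned
# to a full 40-hex commit SHA; whatever remains is a mutable tag/branch ref.
grep -rhoE 'uses:[[:space:]]*[^[:space:]]+@[^[:space:]]+' .github/workflows/ \
  | grep -vE '@[0-9a-f]{40}$' || echo "all actions pinned"
```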
latexr 20 hours ago [-]
> attackers abused a GitHub Action in Bitwarden’s CI/CD pipeline.
I don’t even trust GitHub’s own actions. I used to use only the one to checkout a repository, limited to a specific tag, but then realised that even a tagged version, if it has dependencies which are not themselves tagged, could be compromised, so I stopped and now do the checkout myself. It’s not even that many lines of code, the fact GitHub has a huge npm package with dependencies to do something this basic is insane.
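The manual checkout described above can be a plain `run:` step; a sketch using the standard GITHUB_REPOSITORY/GITHUB_SHA variables (private repos additionally need token handling):

```shell
# Fetch exactly the triggering commit; no third-party action involved.
git init .
git remote add origin "https://github.com/${GITHUB_REPOSITORY}.git"
git fetch --depth 1 origin "${GITHUB_SHA}"
git checkout --detach FETCH_HEAD
```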
GaryBluto 2 days ago [-]
To use a fitting turn of phrase, "Many such cases."
How many times will this happen before people realise that updating blind is a poor decision?
wooptoo 2 days ago [-]
This is precisely why I don't use BW CLI. Use pass or gopass for all your CLI tokens and sync them via a private git repo.
Keep the password manager as a separate desktop app and turn off auto update.
SV_BubbleTime 2 days ago [-]
A supply chain issue that hadn’t happened to BW CLI before is exactly why you use other CLIs that seem to be identically vulnerable to the same issues?
gnfurlong 2 days ago [-]
That's just not true.
The original pass is just a single shell script. It's short, pretty easy to read and likely in part because it's so simple, it's also very stable. The only real dependencies are bash, gnupg and optionally git (history/replication). These are most likely already on your machine and whatever channel you're getting them from (ex: distribution package manager) should be much more resilient to supply chain vulnerabilities.
It can also be used with a pgp smartcard (in my case a Yubikey) so all encryption/decryption happens on the smartcard. Every attempt to decrypt a credential requires a physical button press of the yubikey, making it pretty obvious if some malware is trying to dump the contents of the password store.
isatty 2 days ago [-]
Writing a cli with JavaScript? No thank you.
zie 2 days ago [-]
It's TypeScript, and I'm pretty sure all of the official Bitwarden clients are written in it.
I wrote a version in Python and then Rust back before the official CLI was released. Now you can use https://github.com/doy/rbw instead, which is much better maintained (since I don't use Bitwarden anymore).
npodbielski 2 days ago [-]
What do you use?
zie 2 days ago [-]
I have family I need to support, so I use 1password. It also helped that work gives me a 1P family plan free.
The practical differences to me:
* 1P is aimed at non-tech users more than Bitwarden.
* 1P lets you easily store things other than just passwords (serial #s, license info, SSNs, etc.). You can in Bitwarden, but it's a little annoying.
* 1P lets you store SSH keys (by effectively being an ssh-agent): https://developer.1password.com/docs/ssh/
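For reference, wiring SSH up to that agent is a one-line client-side change per 1Password's docs; a sketch (macOS socket path shown, other platforms use a different path):

```
# ~/.ssh/config -- use the 1Password SSH agent for all hosts (macOS path)
Host *
  IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"
```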
All that said, I still happily recommend BW, especially for people that are cost-conscious, the free BW plan is Good Enough for most everyone.
Security wise, they are equivalent enough to not matter.
Narrower blast radius than the 2022 LastPass breach, at least the vaults weren't touched.
gcolella 1 days ago [-]
Supply chain attacks via package managers are exactly the nightmare scenario. A few months ago I had a production issue where a composer dependency got silently nuked from our vendor/ — the package was setasign/fpdf. Before restoring it, my first instinct was "did someone compromise the repo?". Turned out to be local, but the 10 minutes between discovery and confirmation were terrifying. Now we pin every dependency by hash in composer.lock and review any change in it before deployment. Still not enough — if the registry itself is compromised, the hash pin saves you only from drive-by tampering, not from poisoned-at-origin uploads. Feels like we need something like Sigstore-level attestation for PHP/npm at minimum.
hgoel 2 days ago [-]
Does the CLI auto-update?
Edit: The CLI itself apparently does not, which will have limited the damage a bit, but if it's installed as a snap, it might. Incidents like this should hopefully cause a rollback of this dumb system of forcefully and frequently updating people's software without explicit consent.
I think you had to have installed the CLI during that time-frame, then ran the brand new installed CLI to be vulnerable.
Assuming you had it already installed, you would be safe.
iso1631 1 days ago [-]
I checked a machine this morning and it had updated itself at Apr 23 1715G
I've purged the snap. Really should purge snapd completely.
traderj0e 22 hours ago [-]
Shouldn't a secret manager company be stricter about what third-party code they use anywhere? I don't think this kind of thing flies at Google or Apple.
qux_ca 2 days ago [-]
FYI, Raycast users, the bitwarden-cli version used with the bundled bitwarden extension is 2026-03-01, not the compromised one (2026-04-01).
I am glad I consciously decided not to put 2FA keys in Bitwarden when I adopted it back in 2021, and to manage them with Aegis instead. It was a bit of a hassle to set up backups, but it's good to split your points of failure.
post-it 2 days ago [-]
I've dramatically decreased my reliance on third-party packages and tools in my workflow. I switched from Bitwarden to Apple Passwords a few months ago, despite its worse feature set (though the impetus was Bitwarden crashing on login on my new iPad).
I've also been preferring to roll things on my own in my side projects rather than pulling a package. I'll still use big, standalone libraries, but no more third-party shims over an API, I'll just vibe code the shim myself. If I'm going to be using vibe code either way, better it be mine than someone else's.
pixel_popping 2 days ago [-]
Why not stick to simple/heavily vetted password managers (like KeePassX)? Is there some advanced feature you use?
post-it 2 days ago [-]
I hope you're not using KeePassX, it's been unmaintained for years. KeePassXC is only available for Linux, which means I'd need to use a third-party app for Mac and iOS, so I'd be trusting three vendors instead of one.
Aside from passwords, I store passkeys, secure notes, and MFA tokens.
22 hours ago [-]
pixel_popping 2 days ago [-]
KeePassXC is cross-platform, unsure about iOS.
post-it 2 days ago [-]
Sorry, you're right, I missed the tabs on top. No iOS support though.
traderj0e 22 hours ago [-]
This comment chain illustrates a point: people don't want to use a password manager where there's a question of which fork is even supported.
Vvector 2 days ago [-]
Seamless syncing is the primary reason I stick with BW
traderj0e 22 hours ago [-]
Yeah I've just used Apple's Keychain continuously since 2004, idk what the big deal is with these other things unless you need it to be cross-platform.
sega_sai 2 days ago [-]
So how likely is it that these compromises will start affecting the non-CLI and non-open-source tools? For example, other password managers (in the form of GUIs or browser extensions).
I recently had to disable their Chrome extension because it made the browser grind to a halt (spammed mojo IPC messages to the main thread according to a profiler). I wasn't the only one affected, going by the recent extension reviews. I wonder if it's related.
bstsb 2 days ago [-]
> CLI builds were affected [...]
> Bitwarden’s Chrome extension, MCP server, and other legitimate distributions have not been affected yet.
This will continue to happen more and more, until legislation is passed to require a software building code.
nothinkjustai 2 days ago [-]
Remember how the White House published that document on memory safe languages? I think it’s time they go one step further and ban new development in JavaScript. Horrible language horrible ecosystem and horrible vulns.
hootz 2 days ago [-]
Supply chain attacks aren't exclusive to JS just like malware isn't exclusive to Windows, it's just that JS/Windows is more popular and widespread. Kill JS and you will get supply chain attacks on the next most popular language with package managers. Kill Windows and you will get a flood of Linux/MacOS malware.
mghackerlady 2 days ago [-]
Maybe language based package managers aren't great. Also, npm has design decisions that make it especially prone to supply chain attacks iirc
dnnddidiej 2 days ago [-]
JS apps need more direct and transitive dependencies to do basic things than apps in other languages.
DiffTheEnder 2 days ago [-]
I wonder if 1Password CLI is a top priority for hackers similarly.
y0ssar1an 2 days ago [-]
i'm sure it is, but it's written in Rust so it should be a little harder to pwn
You had to install the CLI through NPM during a very short time frame for it to be affected. If you did get infected, you have to assume all secrets on your computer were accessed and that any executable file you had write access to may be backdoored.
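As a rough triage step for "which executables may be backdoored", you can at least enumerate user-executable files modified during the compromise window. A sketch (assumes GNU find's `-newermt`; mtimes can be forged, so an empty result proves nothing):

```shell
# find_modified DIR SINCE UNTIL -- list user-executable regular files under
# DIR whose mtime falls in (SINCE, UNTIL]. This is triage, not proof:
# attackers can forge timestamps.
find_modified() {
  find "$1" -type f -perm -u+x -newermt "$2" ! -newermt "$3" -print
}

# e.g. for the window discussed here:
#   find_modified "$HOME" '2026-04-01' '2026-04-03'
```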
valicord 2 days ago [-]
No it doesn't?
ErneX 2 days ago [-]
Yes it does, under technical analysis. I don’t want to paste it here when it’s laid out in the article…
hgoel 2 days ago [-]
It seems to be describing what the Checkmarx vulnerability allows to be done on a GitHub Actions runner?
From my understanding the Checkmarx attack could have been prevented by the asfaload project I'm working on. See https://github.com/asfaload/asfaload
It is:
- open source
- accountless (keys are identity)
- using a public git backend, making it easily auditable
- easy to self-host, meaning you can easily deploy it internally
- multisig, meaning even if a GitHub account is breached, malevolent artifacts can be detected
- validating a download transparently to the user, which only requires the download URL, contrary to sigstore
archargelod 2 days ago [-]
That's why I don't use any third-party password managers. You have to trust them not to fuck up security, updates, backups, etc. etc.
I wrote my own password generator - it's stateless, which has the advantage that I never have to back up or sync any data between devices. It just lets you enter a very long, secure master password, a service name, and a username, then runs scrypt on them with good enough parameters to make brute-force attacks infeasible.
For anything important, I also use 2FA.
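A scheme like the one described can be sketched in a few lines. This is an illustrative toy, not the commenter's actual tool: the function name, scrypt parameters, output length, and encoding are all my choices, and a real design also needs a rotation story (e.g. a counter mixed into the salt) and a much larger work factor.

```shell
# genpw MASTER SERVICE USER -- derive a deterministic per-site password by
# running scrypt over the master password with "service:user" as the salt.
# Shells out to Python's hashlib.scrypt (n/r/p values are illustrative).
genpw() {
  printf '%s' "$1" | python3 -c '
import sys, base64, hashlib
master = sys.stdin.buffer.read()
salt = ("%s:%s" % (sys.argv[1], sys.argv[2])).encode()
key = hashlib.scrypt(master, salt=salt, n=2**14, r=8, p=1, dklen=24)
print(base64.urlsafe_b64encode(key).decode())
' "$2" "$3"
}

# Same inputs always yield the same password -- nothing to store or sync.
genpw 'a very long master passphrase' example.com alice
```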
nozzlegear 2 days ago [-]
Another day, another supply chain attack involving GitHub Actions.
adityamwagh 2 days ago [-]
GitHub was down too! Its uptime has been so bad recently.
righthand 2 days ago [-]
It’s the new Npm
saghm 1 days ago [-]
This one also involved npm to be fair
palata 2 days ago [-]
Don't GitHub Actions actually use npm?
dnnddidiej 2 days ago [-]
The new Windows 98
y0ssar1an 2 days ago [-]
they were cooked the minute they chose to write it in typescript
fnoef 2 days ago [-]
I mean, what's the future now? Everyone just vibecoding their own private tools that no "foreign government" has access to? It honestly feels like everything is slowly starting to collapse.
Also, didn't Microsoft (the owner of GitHub) get access to Claude Mythos in order to "seCuRe cRitiCal SoftWaRe InfRasTructUre FoR teh AI eRa"? How's securing GitHub Actions going for them?
giantfrog 2 days ago [-]
How the hell are most people supposed to balance the risk of not updating software against the risk of updating software?
eranation 2 days ago [-]
It's a hard decision. Looking at the last few months, I would say a cooldown by default would have prevented more attacks than it would have caused harm by delaying an upgrade for an immediate RCE, zero-click, EPSS 100%, CVSS 10.0, KEV-listed zero-day CVE. But now that the Mythos 90-day disclosure window gets closer, I don't know what tsunami of urgent patches is headed our way... it's not an easy problem to solve.
I lean toward cooldown by default, with a bypass when an actually reachable, exploitable zero-day CVE is released.
progval 2 days ago [-]
Use a package repository that fast-tracks security updates, like Debian Stable.
"a malicious package that was briefly distributed"
"investigation found no evidence that end user vault data was accessed or at risk"
"The issue affected the npm distribution mechanism for the CLI during that limited window, not the integrity of the legitimate Bitwarden CLI codebase or stored vault data."
"Users who did not download the package from npm during that window were not affected."
Downplaying so hard it's disgusting. Bitwarden failed and became a vector of attack. A vendor who is responsible for all my passwords. What a joke. All trust lost: by the incident and comms-style. Time to move before they make an even bigger mistake.
righthand 2 days ago [-]
Dont write clis in Javascript.
fraywing 2 days ago [-]
Can we please get a break?
Praying to the security gods.
It seems like we've had non-stop supply chain attacks for months now?
dgellow 2 days ago [-]
Expect to continue for years to come
ripped_britches 2 days ago [-]
This is the break right now, we will smile back on these times
dnnddidiej 2 days ago [-]
Stock up on pencils and paper guys.
saidnooneever 1 days ago [-]
some coffee apps will be malicious now with 'melange' as IoC haha.. and Navigator xD... but i guess netscape is kinda malware o.O.
on a more serious note. i told you so levels reaching new heights. dont use password managers. dont handoff this type of risk to a third party.
it's like putting all your keys in a flimsy lockbox outside of your apartment. at some point someone will trip over it, find the keys and explore -_-.
it being impractical with the amount of keys/passwords you need to juggle?
not an excuse. problem should and can be solved differently.
nh43215rgb 1 days ago [-]
> THE MOST TRUSTED PASSWORD MANAGER
> Defend against hackers and data breaches
> Fix at-risk passwords and stay safe online with Bitwarden, the best password manager for securely managing and sharing sensitive information.
yep. literally from their website this moment..and the link to their "statement"[0] is nowhere on the front page.
Oh wait, there is a top banner..."Take insights to action: Bitwarden Access Intelligence now available Learn more >" nope.
Once again, it is in the NPM ecosystem. OneCLI [0] does not save you either. Happens less with languages that have better standard libraries such as Go.
If you see any package that has hundreds of libraries, that increases the risk of a supply chain attack.
A password manager does not need a CLI tool.
traderj0e 21 hours ago [-]
I don't think Go's standard library has the functionality of the JS lib that was infected here. The Axios thing was a fair criticism of JS because Axios should never have been needed in the first place.
hrimfaxi 2 days ago [-]
> A password manager does not need a CLI tool.
Why not? Even macos keychain supports cli.
gear54rus 2 days ago [-]
The above comment is just a bunch of generalizations not meant to address seriously that's why.
rvz 2 days ago [-]
So the comparison here is that you would rather trust a password manager with a CLI that imports hundreds of third-party dependencies over a first party password manager with a CLI that comes with the OS?
I don't think macOS Keychain uses NPM and it isn't in TypeScript or Javascript and, yes it does not need a CLI either.
The NPM and JavaScript/TypeScript ecosystem is part of the problem: it encourages developers to import hundreds of third-party libraries due to its weak standard library, and it takes just ONE compromised transitive dependency for it to be game over.
hgoel 2 days ago [-]
You initially complained about CLIs, not the dependency mess of the JS ecosystem.
You still have not said why this is an issue of having a CLI.
rvz 2 days ago [-]
> You initially complained about CLIs, not the dependency mess of the JS ecosystem.
I complained about both. What does this say from the start?
>> Once again, it is in the NPM ecosystem.
> You still have not said why this is an issue of having a CLI.
Why do you need one? Automation reasons? OpenClaw? This is an attractive way for an attacker to get ALL the passwords in your vault. If run in GitHub Actions, the CLI becomes a coveted target to compromise and an easy exfiltration path, which makes having one worse, not better.
So it makes even more sense for a password manager to not need a CLI at all. This is even before me mentioning the NPM and the Javascript ecosystem.
hgoel 2 days ago [-]
>Why do you need one? Automation reasons? OpenClaw? This is an attractive way for an attacker to get ALL your passwords in your vault.
I need one because I am not always using a graphical interface. What exactly in a GUI do you think makes it harder/less attractive for an attacker?
If the GUI code is compromised in the same way as the CLI, it'll have the same level of access to your vault as soon as you enter your master password, exactly the same as in the CLI.
gear54rus 2 days ago [-]
It does not much matter if it imports 300 or 30 of them; those vulns will land somewhere in those 30 with equal frequency, statistically. If you are advocating developing without dependencies at all, then please start (with any language) and show us all how much you actually ship.
JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
rvz 9 hours ago [-]
> It does not much matter if it imports 300 or 30 of them, those vulns will land somewhere in those 30 with equal frequency statistically.
The point is that the risk is far higher with more dependencies, as I said from the very start. And it happens much more frequently in the NPM ecosystem than in others.
> If you are advocating developing without dependencies at all, then please start (with any language) and show us all how much you actually ship.
Languages in the former group (especially Go) encourage you to use the standard library when possible. JavaScript/TypeScript does not, and encourages you to import more libraries than you need.
> JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
Nope. It is a target because of the necessity for developers to import random packages to solve a problem due to its weak standard library and the convenience that comes with installing them.
You certainly have a Javascript bias towards this issue yourself and there is clearly a problem and you ignoring it just makes it worse.
If it wasn't an issue, we would not be talking about yet another supply chain attack in the NPM ecosystem.
hgoel 2 days ago [-]
I guess anyone/anything using a non-graphical interface should just not use a password manager for some reason?
Not to mention that a graphical application is just as vulnerable to supply chain attacks.
fluidcruft 2 days ago [-]
It seems like we need better standard libraries, but standard libraries turn into tarpits. I sort of like the way Python's stdlib works.
trinsic2 2 days ago [-]
Yeah Im going to have to agree with this
imiric 2 days ago [-]
> A password manager does not need a CLI tool.
That's a wild statement. The CLI is just another UI.
The problem in this case is JS and the NPM ecosystem. Go would be an improvement, but complexity is the enemy of security. Something like (pass)age is my preference for storing sensitive data.
Some examples (hat tip to https://news.ycombinator.com/item?id=47513932):
p.s. shameless plug: I was looking for a simple tool that will check your settings / apply a fix, and was surprised I couldn't find one, I released something (open source, free, MIT yada yada) since sometimes one click fix convenience increases the chances people will actually use it. https://depsguard.com if anyone is interested.
EDIT: looks like someone else had a similar idea: https://cooldowns.dev
Most of these attacks don't make it into the upstream source, so solutions[1] that build from source get you ~98% of the way there. If you can't get a from-source build vs. pulling directly from the registries, can reduce risk somewhat with a cooldown period.
For the long tail of stuff that makes it into GitHub, you need to do some combination of heuristics on the commits/maintainers and AI-driven analysis of the code change itself. Typically run that and then flag for human review.
[1] Here's the only one I know that builds everything from source: https://www.chainguard.dev/libraries
(Disclaimer: I work there.)
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
Having the forge control it half-defeats the point; the attackers who gained permission to push a malicious release, might well have also gained permission to mark it as "urgent security hotfix, install immediately 0 cooldown".
And no, however the push of compromised packages to the forge happens, that is not the same thing as marking something an “urgent security hotfix”, which would require manual approval from the forge maintainers, not an automated process. The only automated process would be a blackout period, where automated scanners try to find issues, and a cool-off period, where the release rolls out progressively to 100% of all projects that depend on it over the course of a few days or a week.
It's not a lack of care about privacy, the 7 days delay is like a new stage between RC and final release, where you pull for testing but not for production.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.
Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.
What we need is to allow the user to easily move from implicitly trusting only the publisher to incorporating third parties. Any of those can be compromised, but users would be better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during cooldown.
Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you'd probably know it? Most of the people who upgraded didn't help, they just got owned.
A cooldown gives anyone who does investigate more time to do their work.
Also, check out the VW Diesel scandal.
Needless to say I’m running all my JS tools in a Docker container these days.
isn't it obvious?
it should be obvious.
why isn't it obvious?
In the context of TFA, don't rely on third-party GitHub Actions that you haven't vetted. Most of them aren't needed, and you can do the same with a few lines of bash, which you can also then use locally.
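Where a third-party action is genuinely needed, pinning it to a full commit SHA (instead of a mutable tag) at least prevents the tag from being silently repointed at malicious code later. A hypothetical example (the action name and SHA below are placeholders, not a real release):

```yaml
steps:
  # a tag like @v2 can be moved after you reviewed it; a 40-char SHA cannot
  - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567  # v2.1.0
```

Note that pinning doesn't help if the pinned action itself pulls unpinned dependencies at runtime, as noted elsewhere in the thread.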
With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).
Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).
These would catch most of the compromised packages, as most of them are published outside of the normal release workflow with stolen credentials, and are run from post-install scripts
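For npm, the post-install opt-out mentioned above is a one-line config (caveat: it also disables your own project's lifecycle scripts such as prepare, so some workflows need per-package exceptions):

```ini
# ~/.npmrc -- don't run dependencies' install/postinstall lifecycle scripts
ignore-scripts=true
```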
By contrast, a client-side cooldown doesn't require very much ecosystem or index coordination.
This kind of thinking is why I don't trust the security of open source software. Industry standard security practices don't get implemented because no one is being paid to actually care and they are disconnected from the users due to not making income from them.
(With that said, I think it also varies by ecosystem. These days, I think I can reasonably assert that Python has expended significant effort to stay ahead of the curve, in part because the open source community around Python has been so willing to adopt changes to their security posture.)
There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".
Exceptions to quarantine rules just invites attackers to mark malicious updates as security patches.
If every kind of breakage, including security bugs, results in a 2-3 hour wait to ship the fix, maybe that would teach folks to be more careful with their release process. Public software releases really should not be a thing to automate away; there needs to be a human pushing the button, ideally attested with a hardware security key.
TypeScript on its own is a great language, with a very interesting type system. Most other type systems can’t run doom.
https://simonwillison.net/2025/Feb/27/typescript-types-can-r...
That doesn't sound like a compliment.
Note that if you get
then comment out the exclude and run. I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low hanging fruit sort of attacks.
I think this is a bad idea, because it means the permissions of any new folders have to be closely guarded, which is easy to forget.
If you're brave you can run whonix.
The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.
Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).
Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.
If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.
Learn how your tools actually build. Build your own containers.
Learn how your tools actually run. Write your own CI templates.
My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me from multiple supply chain attacks against tools that I use in the past few weeks.
Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:
I think there's a lot of things to do here. The hardest parts are probably organizational and social; coordination is hard and network effects are strong. But I also think that there are some basics that help a lot. And developers who serve other developers, whether they are formally security professionals or not, are generally well-positioned to make it easier to do the right thing than the sloppy thing over time.
An alternative hypothesis: what if 7-day cooldowns incentivize security scanners, researchers, and downstream packagers to race to uncover problems within a 7-day window after each release?
Without some actual evidence, I'm not sure which of these is correct, but I'm pretty sure it's not productive to state either one of these as an accepted fact.
Either way there will be fewer eyes on it.
Many companies exist now whose main product is supply chain vetting and scanning (this article is from one such company). They are usually the ones writing up and sharing articles like this - so the community would more than likely hear about it even if nobody was actually using the package yet.
> This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
You're still pulling a lot of dependencies. At least they're pinned though.
https://lib.rs/crates/rbw
Takes what, maybe 15 seconds to compile on a high-core machine from scratch? Isn't the end of the world.
Worse is the scope of having to review all those things; if you'd like to use it for your main passwords, that'd be my biggest worry. Luckily most are well established already as far as I can tell.
326 packages is approximately 326 more packages than I will ever fully audit to a point where my employer would be comfortable with me making that decision (I do it because many eyes make bugs shallow).
It's also approximately 300 more than the community will audit, because it will only be "the big ones" that get audited, like serde and tokio.
I don't see people rushing to audit `zmij` (v1.0.19), despite it having just as much potential to backdoor my systems as tokio does.
Chance of someone auditing all of them is virtually zero, and in practice no one audits anything, so you are still effectively blindly trusting that none of those 326 got compromised.
Using crates is a choice. You can write fully independent C++ or you can pull in Boost + Qt + whatever libraries you need. Even for C programs, I find my package manager downloading tons of dependencies for some programs, including things like full XML parsers to support a feature I never plan to use.
Javascript was one of the first languages to highlight this problem with things like left-pad, but the xz backdoor showed that it's also perfectly possible to do the same attack on highly-audited programs written in a system language that doesn't even have a package manager.
Cargo is modeled after NPM. It works more or less identically, and makes adding thousands of transitive dependencies effortless, just like NPM.
Rust's stdlib is pretty anemic. It's significantly smaller than node's.
These are decisions made by the bodies governing Rust. It has predictable results.
Ultimately in any language you get the sort of experience you build for yourself with the environment you setup, it is possible in most languages to be more conservative and minimal even if the ecosystem at large is not, but it does require more care and time.
The Rust vs. Node comparison seems very shallow to me, and it seems to require a lot of eye squinting to work.
People have beef with Rust in other, more emotional ways, and welcome the opportunity to pretend they dislike it on seemingly-rational grounds a la "Node bad amirite lol".
I am openly admitting I don't care. Such libraries are in huge demand and every programming language ecosystem gains them quite early. So to me the risk of malicious code in them is negligibly small.
TL;DR, the official libraries are going to be split into three parts:
---
1) `core.*` (or maybe `lang.*` or `$MYLANGUAGE.*` or w/e, you get the point) this is the only part that's "blessed" to be known by the compiler, and in a sense, part of the compiler, not a library. It's stuff like core type definitions, interfaces, that sort of stuff. I may or may not put various intrinsics here too (e.g. bit count or ilog2), but I don't know yet.
Reserved by the compiler; it will not allow you to add custom stuff to it.
There is technically also a "pseudo-package" of `debug.*` ("pseudo" in the sense that you must always use it in the full prefixed form, you can't import it), which is just going to be my version of `__LINE__` and similar. Obviously blessed by compiler by necessity, but think stuff like `debug.file` (`__FILE__`), `debug.line` (`__LINE__`), `debug.compiler.{vendor,version}` (`__GNUC__`, `_MSC_VER`, and friends). `debug` is a keyword, which makes it de-facto non-overridable by users (and also easy for both IDEs and compiler to reason about). Of course I'll provide ways of overriding these, as to not leak file paths to end users in release builds, etc.
(side-note: since I want reproducible builds to be the default, I'm internally debating even having a `debug.build.datetime` or similar ... one idea would be to allow it but require explicitly specifying a datetime [as build option] in such cases, lest it either errors out, or defaults to e.g. 1970-01-01 or 2000-01-01 or whatever for reproducibility)
---
2) `std.*`, which is minimal, 100% portable (to the point where it'd probably even work in embedded [in the microcontroller sense, not "embedded Linux" sense] systems and such --- though those targets are, at least for now, not a primary goal), and basically provides some core tooling.
Unlike #1, this is not special to the compiler ... the `std.*` package is de jure reserved, but that's not actually enforced at a technical level. It's bundled with the language, and included/compiled by default.
As a rule (of thumb, admittedly), code in it needs to be inherently portable, with maybe a few exceptions here or there (e.g. for some very basic I/O, which you kind of need for debugging). Code is also required to have no external (read: native/upstream) dependencies whatsoever (other than maybe libc, libdl, libm, and similar things that are really more part of the OS than any particular library).
All of `std.*` also needs to be trivially sandboxable --- a program using only `core.*` & `std.*` should not be able to, in any way, affect anything outside of whatever the host/parent system told it that it can.
---
3) `etc.*`, which actually work a lot like Rust/Cargo crates or npm packages in the sense that they're not installed by default ..... except that they're officially blessed. They likely will be part of a default source distribution, but not linked to by default (in other words: included with your source download, but you can't use them unless you explicitly specify).
This is much wider in scope, and I'm expecting it to have things like sockets, file I/O (hopefully async, though it's still a bit of a nightmare to make it portable), downloads, etc. External dependencies are allowed here --- to that end, a downloads API could link to libcurl, async I/O could link to libuv, etc.
---
Essentially, `core.*` is the "minimal runtime", `std.*` is roughly a C-esque (in terms of feature count, or at least dependencies) stdlib, and `etc.*` are the Python-esque batteries.
Or to put it differently: `core.*` is the minimum to make the language run/compile, `std.*` is the minimum to make it do something useful, and `etc.*` is the stuff to make common things faster to make. (roughly speaking, since you can always technically reimplement `std.*` and such)
I figured keeping them separate allows me to provide for a "batteries included, but you have to put them in yourself" approach, plus clearly signaling which parts are dependency-free & ultra-sandbox-friendly (which is important for embedding in the Lua/JavaScript sense), plus it allows me to version them independently in cases of security issues (which I expect there to be more of, given the nature of sockets, HTTP downloads, maybe XML handling, etc).
Cargo made its debut in 2014, a year before the infamous left-pad incident, and three years before the first large-scale malicious typosquatting attacks hit PyPI and NPM. The risks were not as well-understood then as they are today. And even today it is very far from being a solved problem.
That's a damning indictment of Rust. Something as big as Chrome has IIRC a few thousand dependencies. If a simple password manager CLI has hundreds, something has gone wrong. I'd expect only a few dozen.
Frustratingly, they're not by default though; you need to explicitly use `--locked` (or `--frozen`, which is an alias for `--locked --offline`) to avoid implicit updates. I've seen multiple teams not realize this and get confused about CI failures from it.
The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
Not quite: "1.2.3" = "^1.2.3" = ">=1.2.3, <2.0.0" in Cargo [0], and "1.2" = "^1.2.0" = ">=1.2.0, <2.0.0", so you get the "1.x.x" behavior either way. If you actually want the "1.2.x" behavior (e.g., I've sometimes used that behavior for gmp-mpfr-sys), you should write "~1.2.3" = ">=1.2.3, <1.3.0".
[0] https://doc.rust-lang.org/cargo/reference/specifying-depende...
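For reference, the operators above written out as a Cargo.toml fragment (crate names and versions are illustrative):

```toml
[dependencies]
# "1.2.3" is shorthand for "^1.2.3": allows >=1.2.3, <2.0.0
foo = "1.2.3"
# "1.2" is "^1.2.0": allows >=1.2.0, <2.0.0 — still any 1.x.x
bar = "1.2"
# "~1.2.3" restricts to the 1.2.x line: >=1.2.3, <1.3.0
baz = "~1.2.3"
# "=1.2.3" pins exactly
qux = "=1.2.3"
```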
From thinking it through more closely, it does actually seem like it might be a little safer to avoid specifying the patch version; it seems like putting 1.2.3 would fail to resolve any valid version in the case that 1.2.2 is the last non-yanked version and 1.2.3 is yanked. I feel like "1.2.3" meaning "~1.2.3" would have been a better default, since it at least provides some useful tradeoff compared to "1.2", but with the way it actually works, it seems like putting a full version with no operator is basically worse than either of the other options, which is disappointing.
If however `Cargo.toml` has changed then `cargo build` will have to recalculate the lockfile. Hence why it can be useful to be explicit about `cargo build --locked`.
Do you think it's an actively bad practice, completely benign, or something in between where it makes sense in some cases but probably should still be avoided in others? Offhand, the only variable I can think of that might influence a different choice is closed-source packages being reused within a company (especially if trying to interface with other package management systems, which I saw firsthand when working at AWS but I'm guessing is something other large companies would also run into), but I'm curious if there are other nuances I haven't thought of.
It’s not exactly a tough nut to crack: it changed 2-ish years ago after guidance (and cargo’s defaults) changed: https://blog.rust-lang.org/2023/08/29/committing-lockfiles/
Some people might argue that changing a function to return an error where it didn't previously would be a breaking change; I'd argue that those people are wrong about what semver means. From what I can tell, people having their own mental model of semver that conflicts with the actual specification is pretty common. Most of the time when I've had coworkers claim that semver says something that actively conflicts with what it says, after I point out the part of the spec that says something else, they end up still advocating for what they originally had said. This is fine, because there's nothing inherently wrong with a version schema other than semver, but I try to push back when the term itself gets used incorrectly because it makes discussions much more difficult than they need to be.
No wonder...
The problem is that you also want to update deps.
Or, conversely, encourage programming languages to increase the number of features in their standard libraries.
It's hard for me to take seriously any suggestion that .NET is a model for how ecosystems should approach dependency management based on that, but I guess having an abysmal experience when there are dependencies is one way to avoid risks. (I would imagine it's probably not this bad on Windows, or else nobody would use it, but at least personally I have no interest in developing on a stack that I can't expect to work reliably out of the box on Linux.)
But PSA: If something is critical to the business and you’re using npm, pin your dependencies. I’ve had this debate with other devs throughout the years and they usually point to the lockfile as assurance, but version ranges with a ^ mean that when the lockfile gets updated, you can pull in newer versions you didn’t explicitly choose.
If what you're building can put your company out of business it's worth the hassle.
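Pinning just means dropping the `^`/`~` range operators in package.json (package names and versions here are illustrative); pair it with a committed lockfile and `npm ci` in CI so nothing moves without an explicit change:

```json
{
  "dependencies": {
    "axios": "1.7.9",
    "left-pad": "1.3.0"
  }
}
```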
We have things like Dependabot for this.
https://docs.github.com/en/code-security/tutorials/secure-yo...
Security patches aren't like bugs or features where you can just roll a new version. Often patches need to be backported to older versions allowing software and libraries to be "upgraded" in place with no other change introduced.
Say you had software that controlled the careful mix of chemicals introduced into a municipal water supply. You don't just move from version 1.4 to 3.2; you fix 1.4 in place.
Yes, if they all just backport security patches we'll be fine. No, people are not going to just.
What you're looking for are Debian stable packages. :p
I promptly removed the bw cli programme after that, and I definitely won't be installing it again.
I use ghostty if it matters.
I found the default bwcli clunky and unacceptable, and it's why I don't use it, even though I still have a BitWarden subscription.
I can't think of a plausible explanation for how bw is at fault for its terminal output ending up, across a ssh session and tmux invocation, in the chat history of weechat. Even if bw auto-copied its output to the clipboard (which as far as I could tell by glancing at the cli options, it doesn't and can't), and the clipboard is auto-copied to remote hosts, clipboard contents shouldn't appear in an irc client's history without explicit hacking to do that.
The claim is just noise, particularly because it doesn't seem to have ever been investigated.
It seems prudent, if someone wants to use a cli, to use rbw rather than bw, or even just pass or keepassxc-cli (and self-managed cloud backup or syncing). However, that's based on bw being a javascript mess, not based on the unlikely event of bw injecting its output through ssh into irc clients.
The full strength of the SOP applies by default. CORS is an insecurity feature that relaxes the SOP. Unless you need to relax the SOP, you shouldn't be enabling CORS, meaning you shouldn't be sending an Access-Control-Allow-Origin header at all.
If your front-end at www.example.com makes calls to api.example.com, then it's simple enough to just add www.example.com to CORS.
So I do local dev on https://local.qa.yourappnamehere.com
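That allowlist approach boils down to echoing only known origins and omitting the header otherwise — a rough sketch (function and origin names are mine; a real API would also handle preflight requests):

```typescript
// Hypothetical sketch: decide the Access-Control-Allow-Origin value for a
// request. Echo the origin only when it's on an explicit allowlist; never
// fall back to "*" for an API that handles credentials.
const ALLOWED_ORIGINS = new Set(["https://www.example.com"]);

function corsHeaderFor(requestOrigin: string | undefined): string | null {
  if (requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)) {
    // When echoing, also send "Vary: Origin" so caches key on the origin.
    return requestOrigin;
  }
  return null; // omit the header entirely: the full SOP applies
}
```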
> no synchronized password manager is safe
Care to elaborate? I'd agree that the security/availability tradeoff is different, but "not safe" is as nonsensical a blanket statement as "all/only offline/paper-based/... password managers are safe".
There is a time and place where it makes sense, but a password manager CLI written in TypeScript importing hundreds of third-party packages is a direct red flag. It is a frequent occurrence.
We have seen it happen with Axios, the target of one of the biggest supply chain attacks on the JavaScript/TypeScript ecosystem, and it makes no sense to build sensitive tools on that.
But how else are you going to check if a number is even or odd? Remember, the ONLY design goal is not repeating yourself (or in fact anything anyone has ever thought of implementing).
They probably caused it themselves, somehow, and then blamed bitwarden. Note in the original comment they aren't even entirely sure what the command was, and they weren't familiar with it or they wouldn't have been surprised by its output... so how can they be sure what else they did between that command and the weechat thing?
If the terminal or tmux fed terminal history into weechat, that's also not bw's problem.
I know this because I had the same surprised reaction
Quite bizarre to think how much of my well-being depends on those secrets staying secret.
If you're used to the clunkier workflow of copy-pasting from a separate app, then it's much easier to absent-mindedly repeat it for a not-quite-right url.
I have 1Password configured to require password to unlock once per 24 hours. Rest of the time I have it running in the background or unlock it with TouchID (on the MacBook Pro) or FaceID (on the iPhone).
It also helps that I don’t really sign into a ton of services all the time. Mostly I log into HN, and GitHub, and a couple of others. A lot of my usage of 1Password is also centered around other kinds of passwords, like passwords that I use to protect some SSH keys, and passwords for the disk encryption of external hard drives, etc.
Also a great way of missing out on one of the best protections of password managers; completely eliminating phishing even without requiring thinking. And yes, still requires you to avoid manually copy-pasting without thinking when it doesn't work, but so much better than the current approach you're taking, which basically offers 0 protection against phishing.
Rogue browser extensions, for example, could redirect you away from the bank website (if the bank website has poor security) when you go there, so even if you use the URL from the password manager, if you don't use the autofill feature, you can still get phished. And if the autofill doesn't show, and you mindlessly copy-paste, you'd still get phished. It's really the autofill that protects you here, not the URL in the password manager.
Concretely, I think a redirecting browser extension would use the "webRequest" permission, while for in-page access you'd need a content script for specific pages, so in practice they differ in what the extension gets access to.
Likewise I have links in the bookmarks bar on desktop.
I use these links to navigate to the main sites I use. And log in from there.
I don’t really need to think that way either.
But I agree that eliminating the possibility all-together is a nice benefit of using the browser integration, that I am missing out on by not using it.
It's better, but calling it so much better [that it's unreasonable to forgo the browser extension] is a bit silly to me.
1. Go to website login page
2. trigger the global shortcut that will invoke your password manager
3. Your password manager will appear with the correct entry usually preselected, if not type 3 letters of the site's name.
4. Press enter to perform the auto type sequence.
There, an entire class of exploits entirely avoided. No more injecting third party JS in all pages. No more keeping an listening socket in your password manager, ready to give away all your secrets.
The tradeoff? You now have to manually press ctrl+shift+space or whatever instead when you need to log in.
When you use autofill, the native application will prompt to disclose credentials to the extension. At that point, only those credentials go over the wire. Others remain inaccessible to the extension.
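That disclosure model can be sketched like this (all types and names are hypothetical, not any real password manager's API): the native app holds the vault, and the extension only ever receives the single entry the user approved, never the whole store.

```typescript
type Entry = { site: string; username: string; password: string };

class Vault {
  constructor(private entries: Entry[]) {}

  // Called only after the native app's approval prompt. Returns at most
  // one entry; everything else stays inaccessible to the extension.
  discloseFor(site: string, userApproved: boolean): Entry | null {
    if (!userApproved) return null;
    return this.entries.find((e) => e.site === site) ?? null;
  }
}
```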
(plug: released a small CLI to auto-configure these — https://depsguard.com — I tried to find something that will help non developers quickly apply recommended settings, and couldn't find one)
We need to either screen everybody or cut off countries like North Korea and Iran from the Internet.
My two most precious digital possessions - my email and my Bitwarden account - are protected by a Yubikey that's always on my person (and another in another geographical location). I highly recommend such a setup, and it's not that much effort (I just keep my Yubikey with my house keys)
I got a bit scared reading the title, but I'm doing all I can to be reasonably secure without devolving into paranoia.
Maybe the web vault, but then we do not know when it's compromised (that's the whole idea); so we trust them not to've made a mess...
tl;dr
- https://cooldowns.dev
- https://depsguard.com
(disclaimer: I maintain the 2nd one; if I had known of the first, I wouldn't have released it, I just didn't find anything at the time. They do pretty much the same thing, mine is a bit of an overkill by using Rust...)
That password cannot be cracked because it will always display as ** for anyone else.
My password is *****. See? It shows as asterisks so it's totally safe to share. Try it!
... Scnr •́ ‿ , •̀
So bold and so cowardly at the same time...
Is this a serious question?
It's highly unlikely that the people behind an attack like this would come out (non-anonymously) and take credit. And it's unlikely they'll be caught. So does it matter to most people if it's Russians, Americans, Iranians, North Koreans, or some other country?
If you're a 3-letter agency, you'd want to know and potentially arrest them, but as a random guy on the internet, or even a maintainer, I really don't think it matters.
Why would you steal the key when you're already in the house ?
And for the high profile, like some Iranian scientist who has the code to something important, they wouldn't use things like bitwarden.
I really see no use case when the nsa would need access to your bitwarden vault.
Not really, we already know that NSA attempts shit like this all the time, if that came out, it'd be the same as the Snowden leaks meaning, a bunch of nerds going "Huh, who could have predicted this?". I don't see the point in it being Russia, China or the US, I'd like it as much if the US did it as Russia, so that's why I asked why it matters.
for threat intel people, a lot.
obvious misdirection, but it does serve to make it very obvious it was a state actor.
Lol no, lots of groups do this, non-state ones too.
I've managed to avoid several security breaches in last 5 years alone by using KeePass locally on my own infra.
Bitwarden vaults were not compromised, there was a problem in a tool you used to access the secrets.
What makes it impossible for KeePass access tools to have these issues?
the superiority of keepass users scares away the bad actors
I'd say that since it is a local-only tool, you don't really need to update it constantly, provided you are a sane person who doesn't use a browser extension. That makes it easier to audit and leaves you less at risk of having your tool compromised.
It doesn't have to be keepass though, it can be any local password management tool like pass[1] or its guis or simply a local encrypted file.
[1] https://www.passwordstore.org/
In any case, the fact that the official BitWarden client (which uses Electron btw) and even the CLI is written in Javascript/Typescript - should tell you everything you need to know about their coding expertise and security posture.
If someone links me to "rnicrosoft.com" with a perfectly cloned login page, my eyes might not notice that it's a phishing link, but my browser extension will refuse to autofill, and that will cause me to notice.
Phishing is one of the most common attacks, and also one of the easiest to fall for, so I think using the browser extension is on-net more secure even though it does increase your attack surface some.
I know proper 2fa, like webauthn/fido/yubikeys, also solves this (though totp 2fa does not), but a lot of the sites I use do not support a security key. If all my sites supported webauthn, I think avoiding the browser extension would be defensible.
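The typosquat protection above comes down to exact hostname comparison instead of human eyeballing — a rough sketch (function name is mine; real managers typically match on the registrable domain rather than requiring strict equality):

```typescript
// Hypothetical sketch: an extension only offers autofill when the stored
// entry's hostname matches the page's hostname. URL parsing normalizes
// case, default ports, and punycode, so lookalikes don't slip through.
function shouldAutofill(storedUrl: string, currentUrl: string): boolean {
  const stored = new URL(storedUrl).hostname;
  const current = new URL(currentUrl).hostname;
  // "rnicrosoft.com" can never compare equal to "microsoft.com".
  return stored === current;
}
```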
Sure, there may be typosquatting here and there, but those tend to be much easier to spot than phishing URLs using Unicode variants.
If I ever need to fill the login, I just do any of these:
- KeepassXC has an auto-type feature, so I just choose the needed entry and let it auto-type
- I enable the extension only when I need to log in and choose the one I need to fill (not auto-fill, but only fill when I click on the account from the extension pop-up dashboard).
By the way, syncthing can manage conflicts by keeping one copy of the file with a specific name and date. You can also decide if one host is the source of truth.
I'd go further than that and say for me personally, the fact it's just a file is a selling point, not a "good enough" concession. I can just put passwords.kdbx alongside my notes.txt and other files (originally on a thumbdrive, now on my FTP server) - no additional setup required.
There will be people who use multiple devices but don't already have a good way to access files across them, but even then I'm not fully convinced that SaaS specifically for syncing [notes/passwords/photos/...] really is the most convenient option for them, as opposed to just being a well-marketed local maximum. Easy to add one more subscription, easy to suck it up when terms changes forbid you from syncing your laptop, easy to pray you're not affected by recurring breaches, ... but I'd suspect it often (not always) adds up to more hassle overall.
Plus, now you're responsible for everything. Backups, auditing etc.
To date there have been zero instances when I needed to significantly change a password/service/login/credential solely from my phone and I was unable to access my laptop.
Additionally the file gets synchronized to a workstation that sits in my home office accessible by personal VPN, where it can be accessed in a shell session with the keepass CLI: https://tracker.debian.org/pkg/kpcli
You can use an extremely wide variety of your own choice of secure methods for how to get the file from the primary workstation (desktop/laptop) to your phone.
Password habits for many people are now decades-old, and very difficult to break.
So gradually I don't feel I need syncing that much any more and switched to Keepass. I made up my mind that I'll only change the database from my computer and rclone-push it to any cloud I like (I'm using Koofr for that since it's friendly to rclone), then on any other device I'll just rclone-pull it when needed. If I change something on other devices (like phones), I'll just note it locally there and change the database later.
But ofc if someone needs to change their data/password frequently then Bitwarden is clearly the better choice.
https://cyberpress.org/hackers-exploit-keepass-password-mana...
This wasn't a case where KeePass was compromised in any way, as far as I can tell. This appears to be a basic case of a threat actor distributing a trojanized version via malicious ads. If users made sure they are getting the correct version, they were never in danger. That's not to say that a supply chain attack couldn't affect KeePass, but this article doesn't say that it has.
Long term keepass users aren't going to be affected. If you mention software to others make sure you send them a link to a known safe download location instead of having them search for one (as new users searching like that are more at risk of stumbling on a malicious copy of the official site hosting a hacked version).
It's only a matter of time until _they_ are also popped :(.
> The beacon established command and control over HTTPS
Once the compromise point is preinstall, the usual "inspect after install" mindset breaks down. By then the payload has already had a chance to run.
That gets more interesting with agents / CI / ephemeral sandboxes, because short exposure windows are still enough when installs happen automatically and repeatedly.
Another thing I think is worth paying attention to: this payload did not just target secrets, it also targeted AI tooling config, and there is a real possibility that shell-profile tampering becomes a way to poison what the next coding assistant reads into context.
I work on AgentSH (https://www.agentsh.org), and we wrote up a longer take on that angle here:
https://www.canyonroad.ai/blog/the-install-was-the-attack/
And besides, you could always pull the package and inspect it before running install. Unless you really know the installer and deeply understand its guarantees (e.g., whether it's possible for an install to deploy files outside of node_modules), it's insane to even vaguely trust it to pull and unpack potentially malicious code.
https://mrshu.github.io/github-statuses/
ActionPin — a GitHub Actions hardening checker that flags unpinned third-party actions, overbroad workflow permissions, install scripts that touch secrets, and agent-triggered jobs that can reach production credentials. ActionPin is hosted on GitHub.
I don’t even trust GitHub’s own actions. I used to use only the one to checkout a repository, limited to a specific tag, but then realised that even a tagged version, if it has dependencies which are not themselves tagged, could be compromised, so I stopped and now do the checkout myself. It’s not even that many lines of code; the fact GitHub has a huge npm package with dependencies to do something this basic is insane.
How many times will this happen before people realise that updating blind is a poor decision?
Keep the password manager as a separate desktop app and turn off auto update.
The original pass is just a single shell script. It's short, pretty easy to read and likely in part because it's so simple, it's also very stable. The only real dependencies are bash, gnupg and optionally git (history/replication). These are most likely already on your machine and whatever channel you're getting them from (ex: distribution package manager) should be much more resilient to supply chain vulnerabilities.
It can also be used with a pgp smartcard (in my case a Yubikey) so all encryption/decryption happens on the smartcard. Every attempt to decrypt a credential requires a physical button press of the yubikey, making it pretty obvious if some malware is trying to dump the contents of the password store.
I wrote a version in Python and then rust back before the official CLI was released. Now you can use https://github.com/doy/rbw instead, much better maintained (since I don't use Bitwarden anymore).
The practical differences to me:
All that said, I still happily recommend BW, especially for people that are cost-conscious; the free BW plan is Good Enough for most everyone. Security-wise, they are equivalent enough to not matter.
Edit: The CLI itself apparently does not, which will have limited the damage a bit, but if it's installed as a snap, it might. Incidents like this should hopefully cause a rollback of this dumb system of forcefully and frequently updating people's software without explicit consent.
Also the time range provided in https://community.bitwarden.com/t/bitwarden-statement-on-che... can help with knowing if you were at risk. I only used the CLI once in the morning yesterday (ET), so I might not have been affected?
Assuming you had it already installed, you would be safe.
I've purged the snap. Really should purge snapd completely.
https://github.com/raycast/extensions/blob/6765a533f40ad20cc...
I've also been preferring to roll things on my own in my side projects rather than pulling a package. I'll still use big, standalone libraries, but no more third-party shims over an API, I'll just vibe code the shim myself. If I'm going to be using vibe code either way, better it be mine than someone else's.
Aside from passwords, I store passkeys, secure notes, and MFA tokens.
We recently adopted it at work, and I find the thing to just produce garbage. I've never tuned out noise so quickly.
you have to appreciate the irony of a thing that's supposed to help protect you from vulnerabilities being one.
That thing is expensive as hell and used by lots of huge corps. I know at least one very large one in Mexico ... where the IT team is pretty useless.
So I don't doubt that in the near future we will hear about more hacks.
https://github.com/remorses/sigillo
> Bitwarden’s Chrome extension, MCP server, and other legitimate distributions have not been affected yet.
It is mind boggling how an app that just lists a bunch of items can be so bloated.
The irony! The security "solution" is so often the weak link.
Meanwhile, Bitwarden themselves state that end users were almost never affected: https://community.bitwarden.com/t/bitwarden-statement-on-che...
You had to install the CLI through NPM at a very short time frame for it to be affected. If you did get infected, you have to assume all secrets on your computer were accessed and that any executable file you had write access to may be backdoored.
It is:
- open source
- accountless (keys are identity)
- using a public git backend, making it easily auditable
- easy to self-host, meaning you can easily deploy it internally
- multisig, meaning even if a GitHub account is breached, malevolent artifacts can be detected
- validating a download transparently to the user, which only requires the download URL, contrary to sigstore
I wrote my own password generator - it's stateless, which has the advantage that I never have to back up or sync any data between devices. It just lets you enter a very long, secure master password, service name and a username then runs an scrypt hash on this with good enough parameters to make brute-force attacks unfeasible.
For anything important, I also use 2FA.
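A minimal sketch of that scheme (the scrypt parameters and output encoding are my choices, not necessarily the commenter's), using Node's built-in scrypt:

```typescript
import { scryptSync } from "node:crypto";

// Hypothetical stateless password generator: the site password is derived
// deterministically from master password + service + username, so there is
// no vault file to back up or sync.
function derivePassword(master: string, service: string, username: string): string {
  // Domain-separated salt so "ab"+"c" and "a"+"bc" can't collide.
  const salt = `${service}\x00${username}`;
  // N=2^15, r=8, p=1 uses ~32 MiB, making brute force of the master
  // password expensive; maxmem must be raised above Node's 32 MiB default.
  const key = scryptSync(master, salt, 24, {
    N: 1 << 15,
    r: 8,
    p: 1,
    maxmem: 64 * 1024 * 1024,
  });
  return key.toString("base64"); // 32-char password with mixed case, digits, symbols
}
```

The obvious tradeoff of any stateless scheme: rotating a single leaked site password means changing the inputs (e.g. a per-site counter), and sites with restrictive password rules need special casing.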
Also, didn't Microsoft (the owner of GitHub) get access to Claude Mythos in order to "seCuRe cRitiCal SoftWaRe InfRasTructUre FoR teh AI eRa"? How's securing GitHub Actions going for them?
I lean toward cooldown by default, and bypass it only when an actually reachable, exploitable zero-day CVE is released.
"a malicious package that was briefly distributed"
"investigation found no evidence that end user vault data was accessed or at risk"
"The issue affected the npm distribution mechanism for the CLI during that limited window, not the integrity of the legitimate Bitwarden CLI codebase or stored vault data."
"Users who did not download the package from npm during that window were not affected."
Downplaying so hard it's disgusting. Bitwarden failed and became a vector of attack. A vendor who is responsible for all my passwords. What a joke. All trust lost: by the incident and comms-style. Time to move before they make an even bigger mistake.
Praying to the security gods.
It seems like we've had non-stop supply chain attacks for months now?
on a more serious note. i told you so levels reaching new heights. dont use password managers. dont handoff this type of risk to a third party.
its like putting all your keys in a flimsy lockbox outside of your apartment. at some point someone will trip over it, find the keys and explore -_-.
it being impractical with the amount of keys/passwords you need to juggle?
not an excuse. problem should and can be solved differently.
> Defend against hackers and data breaches
> Fix at-risk passwords and stay safe online with Bitwarden, the best password manager for securely managing and sharing sensitive information.
yep. literally from their website this moment..and the link to their "statement"[0] is nowhere on the front page.
Oh wait, there is a top banner..."Take insights to action: Bitwarden Access Intelligence now available Learn more >" nope.
[0]: https://community.bitwarden.com/t/bitwarden-statement-on-che...
If you see any package that has hundreds of libraries, that increases the risk of a supply chain attack.
A password manager does not need a CLI tool.
[0] https://news.ycombinator.com/item?id=47585838
A password manager absolutely does need a CLI tool??
Why not? Even macos keychain supports cli.
I don't think macOS Keychain uses NPM, it isn't written in TypeScript or JavaScript, and, yes, it does not need a CLI either.
The NPM and JavaScript/TypeScript ecosystem is part of the problem: its weak standard library encourages developers to import hundreds of third-party libraries, and it takes just ONE transitive dependency to be compromised and it is game over.
You still have not said why this is an issue of having a CLI.
I complained about both. What does this say from the start?
>> Once again, it is in the NPM ecosystem.
> You still have not said why this is an issue of having a CLI.
Why do you need one? Automation reasons? OpenClaw? This is an attractive way for an attacker to get ALL your passwords in your vault. The breach itself if run in GitHub Actions would just make it a coveted target to compromise it which makes having one worse not better and for easier exfiltration.
So it makes even more sense for a password manager to not need a CLI at all. This is even before me mentioning the NPM and the Javascript ecosystem.
I need one because I am not always using a graphical interface. What exactly in a GUI do you think makes it harder/less attractive for an attacker?
If the GUI code is compromised in the same way as the CLI, it'll have the same level of access to your vault as soon as you enter your master password, exactly the same as in the CLI.
JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
The point is the risk is far higher with more dependencies as I said from the very start. But it happens much more frequently in the NPM ecosystem than in others.
> If you are advocating developing without dependencies at all, then please start (with any language) and show us all how much you actually ship.
The languages in the former (especially Go) encourage you to use the standard library when possible. JavaScript/TypeScript does not, and encourages you to import more libraries than you need.
> JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
Nope. It is a target because of the necessity for developers to import random packages to solve a problem due to its weak standard library and the convenience that comes with installing them.
You certainly have a Javascript bias towards this issue yourself and there is clearly a problem and you ignoring it just makes it worse.
If it wasn't an issue, we would not be talking about yet another supply chain attack in the NPM ecosystem.
Not to mention that a graphical application is just as vulnerable to supply chain attacks.
That's a wild statement. The CLI is just another UI.
The problem in this case is JS and the NPM ecosystem. Go would be an improvement, but complexity is the enemy of security. Something like (pass)age is my preference for storing sensitive data.