Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
d.sb
Mastodon: @dan


dan , (edited )
@dan@upvote.au avatar

tl;dr there were two leaks: A Microsoft employee had compiler issues and attached the code to a publicly-visible bug report, and Microsoft's public symbol server had debug symbols for the library (which makes it a lot easier to reverse engineer and debug the production build in a debugger).
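For context, this is roughly how a debugger gets pointed at that public symbol server so it can download the matching .pdb files (the local cache path here is just an example):

```
rem Example: tell WinDbg / Visual Studio where to find symbols, with a local cache in C:\symbols
set _NT_SYMBOL_PATH=srv*C:\symbols*https://msdl.microsoft.com/download/symbols
```

With the matching symbols loaded, the disassembly shows real function names instead of bare addresses, which is why having them published for an internal library makes reverse engineering so much easier.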

Did the employee that accidentally leaked it think that the public developer community site was an internal bug tracker? Strange. I wonder if Microsoft do actually use the same site for both internal and external bugs and the employee just selected the wrong category when posting. Seems like an unnecessary risk.

dan ,
@dan@upvote.au avatar

This will have been pulled in as a dependency in many projects and the site either works or does not based on the presence of the bundle.

This wasn't bundled. People added a script tag pointing to a third-party CDN to their sites. The output changes depending on the browser (it only loads the polyfills needed for the current browser), so you can't even use a subresource integrity hash.

dan ,
@dan@upvote.au avatar

In this case the script wasn't bundled at all - it was hotlinked from a third party CDN. Adding malicious code instantly affects all the sites that load it.

The output differs depending on browser (it only loads the polyfills your browser needs) so it's incompatible with subresource integrity.

dan ,
@dan@upvote.au avatar

Reposting my comment from Github:

A good reminder to be extremely careful loading scripts from a third-party CDN unless you trust the owner 100% (and even then, ownership can change over time, as shown here). You're essentially giving the maintainer of that CDN full control of your site. Ideally, never do it, as it's just begging for a supply chain attack. If you need polyfills for older browsers, host the JS yourself. :)

If you really must load scripts from a third-party, use subresource integrity so that the browser refuses to load it if the hash changes. A broken site is better than a hacked one.
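For illustration, this is what that looks like in the page (a minimal sketch; the URL and hash are placeholders, not real values):

```html
<!-- The integrity hash pins the exact file contents; if the CDN ever serves
     different bytes, the browser refuses to execute the script. -->
<script
  src="https://cdn.example.com/some-polyfill.min.js"
  integrity="sha384-AAAAAAAA...placeholder...AAAAAAAA"
  crossorigin="anonymous"></script>
```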


And on the value of dynamic polyfills (which is what this service provides):

Often it's sufficient to just have two variants of your JS bundles, for example "very old browsers" (all the polyfills required by the oldest browser versions your product supports) and "somewhat new browsers" (just polyfills required for browsers released in the last year or so), which you can do with browserslist and caniuse-lite data.
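As a rough sketch of that two-bundle approach (the environment names and queries below are examples, not recommendations), browserslist config in package.json can define both targets, and the build can be run once per environment by setting BROWSERSLIST_ENV:

```json
{
  "browserslist": {
    "modern": [
      "last 2 chrome versions",
      "last 2 firefox versions",
      "last 2 safari versions"
    ],
    "legacy": [
      "> 0.5%",
      "not dead",
      "ie 11"
    ]
  }
}
```

Tools that read browserslist (Babel's preset-env, for example) can then limit transforms (and, with the right options, polyfills) to what each target group actually needs.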

dan , (edited )
@dan@upvote.au avatar

My favourite part is that the developers that currently own it said:

Someone has maliciously defamed us. We have no supply chain risks because all content is statically cached

https://github.com/polyfillpolyfill/polyfill-service/issues/2890#issuecomment-2191461961

Completely missing the point that they are the supply chain risk, and the fact that malicious code was already detected in their system (to the point where Google started blocking ads for sites that loaded polyfill.io scripts).

We don't even know who they are - the repo is owned by an anonymous account called "polyfillpolyfill", and that comment comes from another anonymous account "polyfillcust".

dan , (edited )
@dan@upvote.au avatar

You'd be surprised how much code people blindly reuse without even looking at it, especially in JavaScript. A bunch of it is from projects owned by random individuals. The JS standard library is ridiculously small, so nearly all JS apps import third-party code of some sort. One JS framework can pull in hundreds of third-party modules.

It's much less of an issue with languages like C# and even PHP, where the first-party libraries are often sufficient for building a small or mid-sized app.

dan ,
@dan@upvote.au avatar

Yeah, it really depends on how much you trust the vendor.

Google? Say what you want about the company, but they'll never intentionally serve malware.

Random company with no track record where we don't even know who is maintaining the code? Much less trustworthy. The polyfill.io repo is currently owned by a Github user called "polyfillpolyfill" with no identifying information.

Third-party CDNs make less sense these days though. A lot of hosting services have a CDN of some sort. Most sites have some sort of build process, and you usually bundle all your JS and CSS (both your code and third-party code, often as separate bundles) as part of that.

dan ,
@dan@upvote.au avatar

Is it hosted on New Pied Piper?

dan ,
@dan@upvote.au avatar

I don't understand why this is a bad thing. Open source code is designed to be shared/distributed, and an open-source license can't place any limits on who can use or share the code. Git was designed to be distributed and decentralized partly for this reason (even though people ended up centralizing it on Github anyway).

They might end up using the code in a way that violates its license, but simply cloning it isn't a problem.

dan ,
@dan@upvote.au avatar

I expect it’s going likely to be used to train some Chinese AI model.

Even if they do that, open-source licenses don't disallow it.

dan ,
@dan@upvote.au avatar

Most licences require derivative works to be under the same or similar licence

Some, but probably not most. This is mostly an issue with "viral" licenses like GPL, which restrict the license of derivative works. Permissive licenses like the MIT license are very common and don't restrict this.

MIT does say that "all copies or substantial portions of the Software" need to come with the license attached, but code generated by an AI is arguably not a "substantial portion" of the software.

dan ,
@dan@upvote.au avatar

with mails that dont correspond to the original authors,

Oh! I didn't realise this. Do you have an example?

dan ,
@dan@upvote.au avatar

US will try its best to block technology, including open source projects.

You can't block open source projects from anyone. That's the entire point of open source. For a license to be considered open-source, it must not have any limitations as to who can use it.

dan ,
@dan@upvote.au avatar

I agree with you, and don't really have any answers :)

Tesla is recalling its Cybertruck for the fourth time to fix problems with trim pieces that can come loose and front windshield wipers that can fail | The new recalls each affect over 11,000 trucks ( apnews.com )

The company says in the documents that the front windshield wiper motor controller can stop working because it’s getting too much electrical current. A wiper that fails can cut visibility, increasing the risk of a crash. The Austin, Texas, company says it knows of no crashes or injuries caused by the problem....

dan ,
@dan@upvote.au avatar

Venmo and CashApp

Why would you use either of these when Zelle exists and is built into your bank's app?

Third-party money transfer apps are very rare in a lot of non-US countries, because people just transfer money using their bank account. They're only popular in the USA because US banks were so far behind in terms of technology compared to the rest of the developed world.

Even Apple finally admits that 8GB RAM isn't enough ( www.xda-developers.com )

There were a number of exciting announcements from Apple at WWDC 2024, from macOS Sequoia to Apple Intelligence. However, a subtle addition to Xcode 16 — the development environment for Apple platforms, like iOS and macOS — is a feature called Predictive Code Completion. Unfortunately, if you bought into Apple's claim that...

dan ,
@dan@upvote.au avatar

They didn't have a reason to switch to USB-C, and several reasons to avoid it for as long as possible. Their old Lightning connector (and the big 30-pin connector that came before it) was proprietary, and companies had to pay a royalty to Apple for every port and connector they manufactured. They made a lot of money off of the royalties.

How do I get phone notifications from my server while I'm not connected to my home network?

Hey guys. Im running Home Assistant in docker container for few years and I'm super happy with it. The only way I access my server when not home is wireguard VPN. I noticed that I'm still receiving notifications even when not connected to VPN. I wonder how is that possible?...

dan , (edited )
@dan@upvote.au avatar

Notifications go through Google Firebase servers. This is documented here: https://companion.home-assistant.io/docs/notifications/notification-details/. Your HA server sends the notification to Google, which then sends it to your phone. They don't store the notification; they just relay it.

Most mobile apps do something like this. One reason is to improve battery life - your phone can have a single connection to a Google server instead of every app needing its own separate connection.

There used to be a way to use local notifications (meaning you have to be on the same network, either locally or via a VPN), but I can't find the setting any more so maybe it's gone now. (edit: this is still possible)

dan ,
@dan@upvote.au avatar

That's what I was thinking of! It's not in the settings section I'd expect it to be in (notifications) so I thought it wasn't doable any more.

dan ,
@dan@upvote.au avatar

There's a few comments like this in this thread, from people that I guess didn't actually read the post :)

They weren't asking how to do it; they were asking why it works out-of-the-box with the standard Home Assistant notifications.

You don't need ntfy; the standard Home Assistant app notifications work anywhere since they route via Google Firebase.

dan ,
@dan@upvote.au avatar

I don't see anything in that article that says that Google store the contents of the notification. It just says that they link push tokens to emails, which is true - they have to know who to send the push notification to.

In any case, if you don't want Home Assistant notifications being relayed through Google, you can use a persistent connection so that the app connects directly to your Home Assistant server.

dan ,
@dan@upvote.au avatar

You can enable a persistent connection to get alerts directly without relaying them through Google, but then you need to have a connection to your Home Assistant server all the time (eg by using a VPN or by exposing it publicly)

dan , (edited )
@dan@upvote.au avatar

Use TypeScript, and nonsensical things like adding arrays to objects will be compile-time errors.
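For example (a minimal sketch; the exact compiler message varies by TypeScript version):

```typescript
const arr = [1, 2, 3];
const obj = { a: 1 };

// Plain JavaScript silently evaluates this to the string "1,2,3[object Object]".
// TypeScript rejects it at compile time along the lines of:
//   Operator '+' cannot be applied to types 'number[]' and '{ a: number; }'.
const broken = arr + obj;
```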

dan ,
@dan@upvote.au avatar

Most libraries have TypeScript types these days, either bundled directly with the library (common with newer libraries), or as part of the DefinitelyTyped project.

dan ,
@dan@upvote.au avatar

Python supports type hints, but you need to use a type checker like Pyre or Pyright to actually check them. Python itself doesn't do anything with the type hints.
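A minimal example of the difference:

```python
def double(x: int) -> int:
    return x * 2

# CPython runs this without complaint and returns "oopsoops" - the annotations
# are just metadata at runtime. A checker like Pyright or mypy flags the call
# for passing a str where an int is expected.
double("oops")
```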

dan ,
@dan@upvote.au avatar

You can use WebAssembly today, but you still need some JS interop for a bunch of browser features (like DOM manipulation). Your core logic can be in WebAssembly though. C# has Blazor, and I wouldn't be surprised if there's some Rust WebAssembly projects. I seem to recall that there's a reimplementation of Flash player that's built in Rust and compiles to WebAssembly.
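A rough sketch of what that JS interop looks like from the browser side (the module name, import, and export names are hypothetical; a real module's imports and exports must match what it was compiled with):

```js
// Load a WebAssembly module and hand it a JS callback for DOM work,
// since wasm can't touch the DOM directly.
const imports = {
  env: {
    // the wasm side calls this when it wants to update the page
    updateTitle: () => {
      document.title = "updated from wasm";
    },
  },
};

WebAssembly.instantiateStreaming(fetch("app.wasm"), imports).then(({ instance }) => {
  // run an exported entry point (the name is an assumption)
  instance.exports.run();
});
```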

dan , (edited )
@dan@upvote.au avatar

On Linux, input-remapper usually works pretty well to remap the extra buttons. I wonder if it'd work on this AI button.

dan ,
@dan@upvote.au avatar

FTP isn't really used much any more. SFTP (file transfers over SSH) mostly took over, and people that want to sync a whole directory to the server usually use rsync these days.

dan ,
@dan@upvote.au avatar

That's one nice thing about Java. You can bundle the entire app in one .jar or .war file (a .war is essentially the same as a .jar but it's designed to run within a Servlet container like Tomcat).

PHP also became popular in the PHP 4.x era because it had a large standard library (you could easily create a PHP site with no third-party libraries), and deployment was simply copying the files to the server. No build step needed. Classic ASP was popular before it and also had no build step, but it had a very small standard library and relied heavily on COM components, which had to be manually installed on the server.

PHP is mostly the same today, but modern PHP compiles to opcodes (with an optional JIT since PHP 8), so it's faster than the interpreted PHP of the past.

dan ,
@dan@upvote.au avatar

I remember WS_FTP LE leaving log files everywhere. What a pain to clean up.

dan ,
@dan@upvote.au avatar

That's the thing about all the pirate apps (apps like Weyd, Syncler, the now-defunct TVZion, etc). They're made by people that actually care, not by companies that are only in it for the money. The user experience is usually a lot better. One of those apps plus a Real Debrid subscription and you're set.

dan ,
@dan@upvote.au avatar

I've heard that Google might have information about Real Debrid and apps that support it. I cannot confirm or deny this myself.

dan ,
@dan@upvote.au avatar

Rumor has it that apps that use Real Debrid are way easier to use since you can just go to a TV show and watch it. Even a non technical person can use apps like Weyd. Real Debrid supposedly caches torrents on their server so you can instantly stream them over an encrypted connection.

dan , (edited )
@dan@upvote.au avatar

F-Droid is great. My understanding is that apps on F-Droid have to be free (as in freedom), and they build most apps from source so the builds are verifiable - they'll exactly match the source code in the repo. It's not just a developer uploading a random APK that might be completely different from the code in the repo.

Microsoft Edge nags users with a 3D banner to change Windows 11's default browser ( www.windowslatest.com )

Would you use Edge as your default browser on Windows 11 if Microsoft nags you with a 3D banner? Microsoft thinks you would. In a new experiment, which appears to be rolling out to Edge stable on Windows 11, Microsoft has turned on a banner that uses 3D graphics to promote the browser....

dan , (edited )
@dan@upvote.au avatar

I moved from Windows 10 to Fedora/Debian recently, dual-booting them until I figure out which one I want to use. I've used Debian on servers for 20+ years, but Fedora seems like a great distro too. I switched to Fedora at work as well and I'm enjoying it; there I can choose between a MacBook with macOS, or a Lenovo ThinkStation or X1 Carbon / P1 with Windows or Fedora.

The only Windows-specific app I really cared about was Visual Studio, but Jetbrains Rider is looking like a good replacement. I don't really do any PC gaming any more.

dan , (edited )
@dan@upvote.au avatar

For desktop PC use, I think I'm liking Fedora more than Debian. The newer packages have been useful - Wayland seems less buggy for instance (thankfully I've got an AMD laptop, but unfortunately my desktop has an Nvidia GPU)

I've thought about the Atomic version, but don't really have much time to learn a lot of new stuff at the moment. How different is the workflow with the atomic versions vs the regular Fedora?

dan , (edited )
@dan@upvote.au avatar

In the USA, around 50% of Google traffic and 60% of Facebook traffic goes over IPv6. The largest mobile carriers in the US are nearly entirely IPv6-only too (customers don't get an IPv4 address, just an IPv6 one), using 464XLAT to connect to legacy IPv4-only servers. I'm sure we'd know if routing with IPv6 was slower. Google's data actually shows 10ms lower latency over IPv6: https://www.google.com/intl/en/ipv6/statistics.html#tab=per-country-ipv6-adoption

dan ,
@dan@upvote.au avatar

accidentally get subnets segmented off, no listening ports, have to explicitly configure port forwarding to be able to listen for connections

You can intentionally get that behaviour by using a firewall.

dan , (edited )
@dan@upvote.au avatar

Good luck finding the used ones.

That and the IPv6 address on client systems will periodically rotate (privacy extensions), so the IPs used today won't necessarily be the ones used tomorrow.

(you can disable that of course, and it's usually disabled by default on server-focused OSes)

dan ,
@dan@upvote.au avatar

There is this notion that IPv6 exposes any host directly to the internet, which is not correct.

TP-Link routers used to actually do this. They didn't have an IPv6 firewall at all. In fact they didn't add an IPv6 firewall to their "enterprise-focused" 10Gbps router (ER8411) until October 2023.

dan ,
@dan@upvote.au avatar

It's great that the address space is so large. When designing a new system, you want to make sure it'll hopefully never encounter the same issue as the old system, to ensure you don't have to migrate yet again.

dan , (edited )
@dan@upvote.au avatar

You can use ULAs (unique local addresses) for that purpose. Your devices can have a ULA IPv6 address that's constant, and a public IPv6 address that changes. Both can be assigned using SLAAC (no manual config required).

I do this because the /56 IPv6 range provided by my ISP is dynamic, and periodically changes.
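As a sketch of how a Linux router running radvd can advertise both prefixes (interface name and prefixes are placeholders; the public one uses the documentation range):

```
# /etc/radvd.conf - advertise a stable ULA prefix alongside the public prefix;
# clients generate a SLAAC address from each one.
interface eth0
{
    AdvSendAdvert on;

    # ULA prefix: stable, used for internal traffic
    prefix fd12:3456:789a::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };

    # public prefix delegated by the ISP
    prefix 2001:db8:1234:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

Most consumer router firmware handles the public prefix side automatically when it receives a delegated prefix; the ULA is the part you typically add yourself.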

dan ,
@dan@upvote.au avatar

NAT is, and has always been, an ugly hack. Why would anyone like it?

dan ,
@dan@upvote.au avatar

You wouldn't need NAT. The ULA is used on the internal network, and the public IP is for internet access. Neither of those need NAT.

dan , (edited )
@dan@upvote.au avatar

If you use a single shared public ip then you’re using some amount of address translation

This is practically never the case with IPv6. Usually, each device gets its own public IP. This is how the IPv4 internet used to work in the old days (one IP = one device), and it solves so many problems. No need for NAT traversal since there's no NAT. No need for split horizon DNS since the same IP works both inside and outside your network.

There's still a firewall on the router, of course.

At least that’s how I understand ipv4 and I don’t think ipv6 is much different.

With IPv6, each network device can have multiple IPs. If you have an internal IP for whatever reason, it's in addition to your public IP, not instead of it.

IPs are often allocated using SLAAC (stateless address autoconfiguration). The router tells the client "I have a network you can use; its IP range is 2001:whatever/64", and the client auto-generates an IP in that range, either based on the MAC address (always the same) or random, depending on whether privacy extensions are enabled - usually on for client systems and off for servers.

dan , (edited )
@dan@upvote.au avatar

There's no translation between them. With IPv6, one network interface can have multiple IPs. A ULA (internal IP) is only used on your local network. Any internet-connected devices will also have a public IPv6 address.

ULAs aren't too common. A lot of IPv6-enabled systems only have one IP: the public one.

dan ,
@dan@upvote.au avatar

Having a large range has a number of benefits though. Companies that have dozens of IPv4 ranges may be fine with a single IPv6 range, which simplifies routing rules.

A lot of features in IPv6 take advantage of the fact that networks have at least a /64 range (at least if they're built correctly according to RFC4291 and newer specs). SLAAC is a major one: Devices can auto-configure IP addresses without having to use something like a stateful DHCP server.

dan , (edited )
@dan@upvote.au avatar

your external IPs might change, such as when moving between ISPs

This is true

You would NAT a hosts external address to its internal address.

This is usually not true.

If you're worried about your external IP changing (like if you're hosting a server on it), you'd solve it the same way you solve it with IPv4: using dynamic DNS. The main difference is that you run the DDNS client on the computer rather than the router. If there are multiple systems you want to be able to access externally, you'd have multiple DDNS hostnames.

dan ,
@dan@upvote.au avatar

No, because there's use cases for systems that aren't connected to the internet. Also, public IPs can be dynamic, so you might not want to rely on them internally.
