just_another_person ,

Is this a question?

We haven't even come close to exhausting 64-bit addresses yet. If you think the bit number makes things faster, it's technically the opposite.

jwr1 OP ,
@jwr1@kbin.earth avatar

It's a link to an article I found interesting. It basically details why we're still using 64-bit CPUs, just as you mentioned.

fmstrat ,

Comment OP must never learn anything new. Good find.

Cethin ,

Yeah, 64-bit handles almost all use cases we have. Sometimes we want double the precision (a double) or double the length (a long), but we can do that without the CPU being 128-bit. It's harder to do half. Sure, native 128-bit would be slightly faster for some things, but not significantly.
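
For illustration, a minimal sketch of that idea, assuming GCC or Clang's non-standard `__int128` extension: the compiler builds 128-bit math out of ordinary 64-bit instructions, no 128-bit CPU required.

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* A 128-bit integer emulated with pairs of 64-bit registers
       and add-with-carry; no 128-bit hardware involved. */
    unsigned __int128 big = (unsigned __int128)UINT64_MAX + 1; /* 2^64 */
    big *= 1000;

    /* printf has no standard 128-bit conversion, so print two 64-bit halves. */
    printf("high: %" PRIu64 " low: %" PRIu64 "\n",
           (uint64_t)(big >> 64), (uint64_t)big);
    return 0;
}
```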

sugar_in_your_tea ,

And you can get 128-bit data to the CPU, so those things can be fast if we need them to be.

henfredemars ,

And we have wide instructions that can process this data, such as for multimedia applications.

Addressing and memory size has been the historic motivator for wider registers, but it’s probably not going to be in my lifetime that I see the need for 128.
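
For example (a minimal sketch assuming an x86-64 compiler; SSE2 is baseline on every x86-64 chip), the 128-bit XMM registers let a single instruction add four 32-bit values at once, which is exactly what multimedia code leans on.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdio.h>

int main(void) {
    /* Two vectors of four 32-bit ints packed into 128-bit XMM registers. */
    __m128i a = _mm_setr_epi32(1, 2, 3, 4);
    __m128i b = _mm_setr_epi32(10, 20, 30, 40);

    /* One instruction (paddd) performs all four additions in parallel. */
    __m128i sum = _mm_add_epi32(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}
```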

Technus ,

We don't even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anything between 40 and 52 bits depending on the specific configuration.

just_another_person ,

[Thread, post or comment was deleted by the author]

    MrQuallzin ,

    I think they were just adding to the conversation

    b34k ,

    [Thread, post or comment was deleted by the moderator]

    just_another_person ,

    [Thread, post or comment was deleted by the moderator]

    Technus ,

    Jesus Christ, what crawled up your ass and died?

    mohammed_alibi ,

    [Thread, post or comment was deleted by the moderator]

    xyz1195 ,

    That was a nice chuckle, ty.

    Technus ,

    I actually added detail that wasn't already discussed in the article?

    AlphaAutist ,

    I actually didn’t know that about addressing before your comment and so I found it very interesting, thanks

    Technus ,

    Some applications use those unused bits to add tags to pointers, but it's important to mask them out before attempting to dereference the address. I'm not sure about ARM, but x86-64 requires bits 48-63 to be copies of bit 47 (kinda like sign-extension), ironically to ensure that no one is using those bits to store extra data.
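
    Roughly, the idea looks like this in C (a hypothetical tag layout, assuming 48-bit virtual addresses; real code also has to restore the canonical sign-extended form, as this sketch does):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_SHIFT 48                        /* bits 48-63 are free with 48-bit addressing */
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

    /* Stash a small tag in the unused top bits of a pointer. */
    static uint64_t tag_pointer(void *p, uint16_t tag) {
        return ((uint64_t)tag << TAG_SHIFT) | ((uintptr_t)p & ADDR_MASK);
    }

    /* Drop the tag and sign-extend bit 47 so the address is canonical again. */
    static void *untag_pointer(uint64_t tagged) {
        int64_t addr = (int64_t)(tagged << 16) >> 16; /* arithmetic shift copies bit 47 upward */
        return (void *)(uintptr_t)addr;
    }

    int main(void) {
        int *x = malloc(sizeof *x);
        *x = 42;

        uint64_t tagged = tag_pointer(x, 0x7);
        int *back = untag_pointer(tagged);      /* must untag before dereferencing */

        printf("tag=0x%llx value=%d\n",
               (unsigned long long)(tagged >> TAG_SHIFT), *back);
        free(back);
        return 0;
    }
    ```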

    Voroxpete ,

    Is this a question?

    For the people who don't know the answer? Yes.

    Not everything you see is intended for your consumption. Let people enjoy learning things.

    Cocodapuf ,

    I totally agree. I know a teacher who likes to say:

    "I believe there really is no such thing as a dumb question. As long as it's an honest question (not rhetorical or sarcastic), then it's a genuine request for more information. So even if it's coming from a place of extreme ignorance, asking a question is an attempt to learn something, and the effort should be applauded."

    mitrosus ,

    Learned from the teacher. Thanks.

    otp ,

    Is this a question?

    Woah, meta.

    Yes, it is.

    This is not a question, though.

    AmidFuror ,

    That would be like 6 minutes abs.

    AnarchoSnowPlow ,

    That's crazy. You can't do six. It's seven! SEVEN MINUTE ABS!

    elbarto777 ,

    What's this in reference to?

    AmidFuror ,

    There's Something About Mary (1998)

    elbarto777 ,

    Ha, cool! It's been a while since I saw that movie.

    Man, 1998?! Time flies.

    unreachable ,
    @unreachable@lemmy.world avatar

    so i guess the next bit after 64-bit CPUs is the qubit, the quantum bit

    Ephera ,

    Quantum computers won't displace traditional computers. There are certain niche use cases for which quantum computers could become wildly faster in the future, but for most calculations we do today they're just unreliable. So they'll mostly coexist.

    UraniumBlazer ,

    In other words, like GPUs. GPUs suck ass at complex calculations. However, they work great for a large number of easy calculations, which is what's needed for graphics processing.

    amanda ,
    @amanda@aggregatet.org avatar

    Presumably you’d have a QPU in your regular computer, like with other accelerators for graphics etc, or possibly a tiny one for cryptography integrated in the CPU

    Tinidril ,

    There would have to be some kind of currently unforeseen breakthrough before something like that would be even remotely possible. In all likelihood, quantum computing will stay in specialized data centers. For the problems quantum would solve, there's really no advantage to having it local anyways.

    amanda ,
    @amanda@aggregatet.org avatar

    I assume we need a lot of breakthroughs to even have useful quantum computing at all, but sure.

    Isn’t quantum encryption interesting for end users?

    hades ,

    Quantum encryption isn't something quantum computers can even do. It's not just transforming bits into other bits, it's about building entirely new security properties based on physical properties of matter.

    So, even if it is interesting for end users, they would need dedicated hardware anyway.

    amanda ,
    @amanda@aggregatet.org avatar

    Shows how much I know! (Nothing)

    magic_lobster_party ,

    Probably not in consumer grade products in any foreseeable future.

    irotsoma ,
    @irotsoma@lemmy.world avatar

    Because computers haven't come even close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven't even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4GB of memory without slow workarounds.

    tunetardis ,

    I know a google engineer who was saying they're having to update their code bases to handle > 16 exabytes of storage, if you can imagine. But yeah, that's storage, not RAM.

    Appoxo ,
    @Appoxo@lemmy.dbzer0.com avatar

    I would kind of enjoy the trouble of needing to store and owning the place for 16 exabytes...

    ArbiterXero ,

    32-bit CPUs having difficulty accessing more than 4GB of memory was exclusively a Windows problem.

    amanda ,
    @amanda@aggregatet.org avatar

    Interesting! Do you have a link to a write-up about this? I don't know anything about the Windows memory manager.

    pivot_root , (edited )

    Only slightly related, but here's the compiler flag to disable an arbitrary 2GB limit on x86 programs.

    Finding the reason for its existence from a credible source isn't as easy, however. If you're fine with an explanation from StackOverflow, you can infer that it's there because some programs treat pointers as signed integers and die horribly when anything above 7FFFFFFF gets returned by the allocator.
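
    A contrived sketch of that failure mode (hypothetical code, not from the article or the linked answer, just to show why addresses above 0x7FFFFFFF break the assumption):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Buggy pattern: stuffing a pointer into a signed 32-bit integer. */
    static int32_t as_signed(void *p) {
        return (int32_t)(uintptr_t)p;
    }

    int main(void) {
        /* Pretend the allocator returned an address above 0x7FFFFFFF,
           which can only happen once the 2GB limit is lifted. */
        uintptr_t high_addr = 0x80000000u;

        int32_t bogus = as_signed((void *)high_addr);
        if (bogus < 0) {
            /* Code that assumed "negative means error" now misbehaves. */
            printf("0x%08x looks negative (%d) to the buggy code\n",
                   (unsigned)high_addr, bogus);
        }
        return 0;
    }
    ```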

    AnyOldName3 ,
    @AnyOldName3@lemmy.world avatar

    It's a silly flag to use, as it only works when running 32-bit Windows applications on 64-bit Windows, and if you're compiling from source, you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago when people still used 32-bit Windows sometimes (which was usually just down to OEMs installing the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to have only one binary or you consumed a precompiled third-party library and had to match its architecture.

    wizardbeard ,
    @wizardbeard@lemmy.dbzer0.com avatar

    You can also toggle it on precompiled binaries with the right tool (or a hex editor if you're insane), which was my main use case. Lots of old games that never got 64-bit releases that benefit from having access to the extra RAM, especially if you're modding them. It's a great way to avoid out of memory crashes.

    ArbiterXero ,

    Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

    Also the entire article comes down to simple math.

    Bits is the number of digits.

    So like a 4 digit number maxes out at 9999 but an 8 digit number maxes out at 99 999 999

    So when you double the number of digits, the maximum value grows exponentially: 10^4 times bigger in this case. It just sounds small because you're only showing that the exponent doubles.

    10^4 is WAY smaller than 10^8
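
    The same thing in binary, as a quick sketch:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Doubling the width squares the range: 2^64 = 2^32 * 2^32. */
        printf("32-bit max: %llu\n", (unsigned long long)UINT32_MAX); /* ~4.3 billion */
        printf("64-bit max: %llu\n", (unsigned long long)UINT64_MAX); /* ~18.4 quintillion */
        return 0;
    }
    ```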

    neclimdul , (edited )

    It was actually 3GB, because operating systems have to reserve part of the memory address space for other things. It's difficult for any 32-bit operating system to address above 4GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and such. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.

    A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.

    https://en.m.wikipedia.org/wiki/3_GB_barrier

    amanda ,
    @amanda@aggregatet.org avatar

    Wow they just…disabled all RAM over 3 GB because some drivers had hard coded some mapped memory? Jfc

    ms_lane ,

    Only on consumer Windows.

    Windows Server never had the problem. But wouldn't allow Creative Labs drivers to be installed either...

    aard ,
    @aard@kyu.de avatar

    You still had a 4GB memory limit per process, as well as a total memory limit of 64GB. The first one especially was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.

    ArbiterXero ,

    Yeah I acknowledged the shortcomings in a different comment.

    It was a duct tape solution for sure.

    Blue_Morpho ,

    Your other posts didn't address your claim that it was a Windows-only problem. Linux had it too, and some distros (Raspberry Pi) have the same limitations as Windows 95.

    32 bit Windows XP got PAE in 2001, two years after Linux. 64 bit Windows came out in 2005.

    ArbiterXero ,

    I'm not overly worried about a few random Linux distros that did strange things, nor Raspberry Pis. I mean, I don't know why you'd use 32-bit on an 8GB Pi anyway, so it shouldn't affect anyone unless they did something REALLY strange.

    For the average user, neither of those scenarios mattered, especially back when the problem was at its peak.

    2 years was a long time to wait to use the extra memory that Linux could use out of the box.

    I honestly don’t even remember XP having PAE, but if you NEED the validation, sure, Microsoft EVENTUALLY got it.

    Except that Microsoft removed it in SP2 LOL!

    And all the home-use versions of XP still maxed out at 4GB.

    They could see the memory but couldn't use it; oh, I'd forgotten that!

    Wikipedia was a fun read.

    Blue_Morpho ,

    2 years was a long time to wait to use the extra memory that Linux could use out of the box.

    For 8 years, Linux had the same limitations as Windows. Then for 2 years it was ahead. PAE could always be turned back on with a boot switch. Going back 25 years to criticize Windows is kind of weird, but you do you.

    (I run Linux on a variety of PCs, SBC's, and VM's in my house. I just get annoyed by unjustified Linux fanboyism.)

    ArbiterXero ,

    Not just for 2 years, XP removed it in sp2.

    And even when it supported it, many versions wouldn’t let you use it, or would let you “see” it but not use it.

    For basically the life of XP.

    Blue_Morpho ,

    And as I said, it could still be enabled with a boot switch.

    It's not like all distros in 1999 had PAE enabled by default. You had to find a PAE-enabled kernel.

    And Linux PAE has been buggy off and on for 20 years:

    "It worked for a while, but the problem came back in 2022. "

    https://flaterco.com/kb/PAE_slowdown.html

    Blue_Morpho ,

    I'm not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.

    Moobythegoldensock ,

    Not really, Raspberry Pi had that same issue with its 32 bit distros.

    amanda ,
    @amanda@aggregatet.org avatar

    The comments on this one really surprised me. I thought the kinds of people who hang out on XDA-developers were developers. I assumed that developers had a much better understanding of computer architecture than the people commenting (who of course may not be representative of all readers).

    I also get the idea that the writer is being vague not to simplify but because they genuinely don’t know the details, which feels even worse.

    sandalbucket ,

    I think it's a D-tier article. I wouldn't be surprised if it was half GPT. It could have been summarized in a single paragraph, but it was clearly drawn out to make screen real estate for the ads.

    SaltySalamander ,
    @SaltySalamander@fedia.io avatar

    The majority of articles I come across are exactly like this, needlessly drawing everything out to maximize word count and, thus, maximize ad space.

    kerrigan778 ,

    Uh, the PlayStation 2 would like a word?

    magic_lobster_party ,

    Not true 128-bit. It has 128-bit SIMD capabilities, but that's about it. That was probably mostly for marketing reasons, to show how much better it is than the N64 (which is also "64-bit" for marketing reasons).

    By that measure, we already have 512-bit computers: https://en.wikipedia.org/wiki/AVX-512

    djehuti ,

    We do. Next question.

    kilgore_trout ,
    @kilgore_trout@feddit.it avatar

    Why are we not using them in end-user devices?

    ms_lane ,

    We are.

    Addressing-wise, no, we don't have consumer-level 128-bit CPUs and probably won't ever need them.

    Instruction-wise, though, SSE had some 128-bit ops (OR/XOR, MOV), AVX brought 256-bit floating-point vector math, AVX2 extended that to 256-bit integers, and AVX-512 is, you guessed it, 512-bit vector math. AltiVec on PPC had 128-bit vectors 20 years ago.

    stembolts ,

    So cool.

    DAMunzy ,

    Was that a marketing thing? Because the SH-4 was only 32-bit AFAIK.

    magic_lobster_party ,

    The only thing I can find is that it has a 128-bit graphics-oriented floating-point unit delivering 1.4 GFLOPS.

    Probably only for marketing reasons. Everyone was desperate not to be worse than N64.

    dlundh ,

    Yes.

    mox ,

    John Mashey wrote about this nearly 30 years ago. This Usenet thread is worth a read.

    Mio ,

    Would there be a downside? Slower? Very costly?

    Mihies ,

    Yes

    addie ,
    @addie@feddit.uk avatar

    If you made memory access lines twice as wide, they'd take up more space. More space means (a) chips run slower, because it takes time for the electricity to get there, and (b) they'd be bigger and more expensive.

    The main problem with 32-bit, as others have noted, is that 4GB isn't really so much RAM. CPUs do addition and subtraction the way we were taught at school ('carry the one'); they've an overflow bit that's set when your sum doesn't fit in the columns. On 8-bit CPUs, we were always checking that bit when adding up large numbers. On 64-bit CPUs, we can deal with truly massive numbers anyway, and they're so fast at doing sums, and usually waiting for memory, that it's barely a hassle.

    Moving to 128-bit would give us a truly minuscule, probably unmeasurable, benefit in exchange for significant downsides. We could make them, but it would be pointless.
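
    The 'carry the one' bookkeeping looks roughly like this when a CPU has to add numbers wider than its registers; a sketch using 64-bit words, though narrower CPUs do the same dance with 8- or 16-bit pieces:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Add two 128-bit numbers stored as (high, low) pairs of 64-bit words,
       propagating the carry by hand, the way narrower CPUs must for big sums. */
    static void add128(uint64_t ah, uint64_t al,
                       uint64_t bh, uint64_t bl,
                       uint64_t *rh, uint64_t *rl) {
        *rl = al + bl;
        uint64_t carry = (*rl < al);   /* overflow of the low word sets the carry */
        *rh = ah + bh + carry;
    }

    int main(void) {
        uint64_t high, low;
        /* UINT64_MAX + 1 rolls over into the high word: result is 1:0. */
        add128(0, UINT64_MAX, 0, 1, &high, &low);
        printf("high=%llu low=%llu\n",
               (unsigned long long)high, (unsigned long long)low);
        return 0;
    }
    ```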

    magic_lobster_party ,

    More complexity with barely any (practical) benefits for consumers.

    hades ,

    We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don't we drive eight-wheeled vehicles?

    borari ,
    @borari@lemmy.dbzer0.com avatar

    Some of us drive 18-wheeled vehicles.

    profdc9 ,

    That's a lot of wheels. I'd hate to have to inflate all those tires.

    kayazere ,

    Funny how we are moving back to bicycles, since cars aren't a scalable solution.

    Surreal ,

    Bus is, though

    bitwaba ,

    More wheels!

    SaltySalamander ,
    @SaltySalamander@fedia.io avatar

    But we aren't really.

    sensiblepuffin ,
    @sensiblepuffin@lemmy.world avatar

    We should be.

    robotica ,

    They serve different purposes, what's wrong with having both bikes and cars? People live outside of cities too, you know

    sensiblepuffin ,
    @sensiblepuffin@lemmy.world avatar

    Moving towards something doesn't imply that cars are being obliterated or banned. Funnily enough, whenever I'm in an unpleasant altercation with a driver, they tend to have plates from far away. If they want to drive their cars out in the country, they can do so - when you're in a dense urban area (which is the most sustainable way to arrange people), you can park and get on the subway with the rest of us.

    SaltySalamander ,
    @SaltySalamander@fedia.io avatar

    I couldn't if I wanted to. I live 30 miles from my work. No thank you.

    TonyTonyChopper ,

    Lobbying by the auto corporations obviously. More wheels is more better

    https://upload.wikimedia.org/wikipedia/commons/thumb/7/7a/Series-E235-0_9.jpg/1280px-Series-E235-0_9.jpg

    CptEnder ,

    So VM? Actually makes sense.

    VeganCheesecake ,
    @VeganCheesecake@lemmy.blahaj.zone avatar

    Huh, I've been in that train. Sudden, random hit of Nostalgia.

    randombullet ,

    I mean we do right?

    Trains are typically 2 x 4 bogies.

    But then high-speed rail has fewer wheels due to friction.

    Liz ,

    See here's where this analogy is perfect. Sometimes a bicycle is the best solution, just like how sometimes a microcontroller is the best solution. You use the tool you need for the job, and American product design is creating way too many "smart" products just like how American town planning demands too many cars. Bring back the microcontroller! Bring back the bike!

    Etterra ,

    Okay, so why can't we just not use exponentially growing values? Like 96-bit (64 + 32). Is there something intrinsic about the size increases that means they HAVE to be exponential? Why not linear scaling: 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, etc.?

    SorteKanin ,
    @SorteKanin@feddit.dk avatar

    Because CPU registers are all powers of 2, i.e. exponential in this fashion. And it's also just the same reason - 64 is high enough, why go to 96 or 80 or something?

    wewbull ,

    We can, but it's awkward to do so. By having everything work with powers of 2, you don't need to have everything the same size, but you can still pack things in memory efficiently.

    If your registers were 48 bits long, you could use one to store 6 bytes, or 3 short ints, but only one int, with 16 bits going unused. If they're powers of two in size, you can always fit smaller things in them with no wasted space.
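
    A tiny sketch of that 'everything tiles evenly' point, assuming C11's static_assert:

    ```c
    #include <assert.h>

    /* Power-of-two sizes divide each other, so bytes, shorts and ints pack
       into a 64-bit register or memory word with nothing left over. */
    static_assert(64 % 8 == 0 && 64 % 16 == 0 && 64 % 32 == 0,
                  "8-, 16- and 32-bit values tile a 64-bit word exactly");

    /* A hypothetical 48-bit register fits three 16-bit shorts (3 * 16 = 48),
       but a 32-bit int leaves 16 bits wasted (48 - 32 = 16). */
    static_assert(48 % 16 == 0 && 48 % 32 != 0,
                  "48 bits packs shorts cleanly but wastes space on ints");

    int main(void) { return 0; }
    ```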

    asmoranomar ,

    A better example is to explain the chaos of having to go to the grocery store and pick up some hot dogs and buns. You know the pain.

    friend_of_satan ,

    In binary, when you add one more numeric place, things double. Not doubling would be like having two digit decimal numbers but only allowing people to count to 50.

    vane ,

    tell that to PlayStation 2 owners
