"These price increases have multiple intertwining causes, some direct and some less so: inflation, pandemic-era supply crunches, the unpredictable trade policies of the Trump administration, and a gradual shift among console makers away from selling hardware at a loss or breaking even in the hopes that game sales will subsidize the hardware. And you never want to rule out good old shareholder-prioritizing corporate greed.

But one major factor, both in the price increases and in the reduction in drastic “slim”-style redesigns, is technical: the death of Moore’s Law and a noticeable slowdown in the rate at which processors and graphics chips can improve."

  • tomalley8342@lemmy.world · 1 day ago

    A 2060 Super for $300, and then another $200 for a decent processor, puts you ahead of a PS5, and at a comparable price.

    you’re going to have to really scrounge for deals in order to get a PSU, storage, memory, motherboard, and a case with your remaining budget of $0.

    Here is a mid-tier build for $850: https://pcpartpicker.com/list/zYGmJn

    This is $150 more expensive, and the GPU is half as performant as the reported PS5 Pro equivalent.

    • sp3ctr4l@lemmy.dbzer0.com · 22 hours ago

      Ok so, for starters, your ‘reported equivalent’ source is wrong.

      https://www.eurogamer.net/digitalfoundry-2024-playstation-5-pro-weve-removed-it-from-its-box-and-theres-new-information-to-share

      The custom AMD Zen 2 APU (a combined CPU + GPU, as is done in laptops) in a PS5 Pro is 16.7 TFLOPs, not 33.

      So your PS5 Pro is actually roughly equivalent to that posted build… at least by your ‘methodology’, though it is utterly unclear to me what your actual methodology for the performance comparison even is.

      The PS5 Pro uses 2 GB of DDR5 RAM, and 16 GB of GDDR6 RAM.

      This is… wildly outside the realm of being directly comparable to a normal desktop PC, which, at bare minimum these days, has 16 GB of DDR4/5 RAM. The GDDR6 RAM would be part of the detachable GPU board itself, anywhere from 8 GB all the way up to 32 GB if you get an Nvidia 5090; consensus seems to be that 16 GB of GDDR6/7 is probably what you want as a minimum, unless you want to be very reliant on AI upscaling/framegen, and the input lag and whatnot that comes with using that on an underpowered GPU.

      Short version: The PS5 Pro would be a wildly lopsided, nonsensical architecture to try to replicate one-to-one in a desktop PC… 2 GB of system RAM will run lightweight Linux OSes, but there is not a chance in hell you could run Windows 10 or 11 on that.

      Fuck, even getting 7 to work with 2 GB of RAM would be quite a challenge… if not impossible; I think 7 required 4 GB of RAM minimum?

      The closest AMD chip to the PS5 Pro that I see, in terms of TFLOP output… is the Radeon 7600 Mobile.

      (This is probably why Cyberpunk 2077 did not (and will never) get a ‘performance patch’ for the PS5 Pro: CP77 can only pull both high (by console standards) framerates at high resolutions and ray tracing/path tracing on Nvidia mobile-class hardware and up, which the PS5 Pro doesn’t use.)

      But, let’s use the PS5 Pro’s ability to run CP77 at 2K 60 fps on… what PC players recognize as a mix of medium and high settings… as our benchmark for a comparable standard PC build. Let’s be nice and just say it’s the high preset.

      (a bunch of web searching and performance comparisons later…)

      Well… actually, the problem is that basically nobody makes or sells desktop GPUs that underpowered anymore; you’d have to go to the used market, or find some old unpurchased stock someone has had lying around for years.

      The RX 6600 in the partpicker list is fairly close in terms of GPU performance.

      Maybe pair it with an AMD 5600X processor, if you… can find one? Or a 4800S, which supposedly really were just rejects/run-off from the PS5 and Xbox Series X and S chips, rofl?

      Yeah, legitimately, the problem with trying to build a PC in 2025 to the performance specs of a PS5 Pro… is that at the bare minimum tier of current- and last-gen standard PC architecture, they just don’t even make hardware that weak anymore.

      EDIT:

      oh, final addendum: if your TV has an HDMI port, kablamo, that’s your monitor; you don’t strictly need a new one.

      And there are also many ways to get a wireless or wired console-style controller working in a couch PC setup.

      • tomalley8342@lemmy.world · 21 hours ago

        Short version: The PS5 Pro would be a wildly lopsided, nonsensical architecture to try to replicate one-to-one in a desktop PC… 2 GB of system RAM will run lightweight Linux OSes, but there is not a chance in hell you could run Windows 10 or 11 on that.

        Fuck, even getting 7 to work with 2 GB of RAM would be quite a challenge… if not impossible; I think 7 required 4 GB of RAM minimum?

        It’s shared memory, so you would need to guarantee access to 16 GB on both ends.

        The RX 6600 in the partpicker list is fairly close in terms of GPU performance.

        I don’t know how you could arrive at such a conclusion, considering that the base PS5 has been measured to be comparable to the 6700.

        • sp3ctr4l@lemmy.dbzer0.com · 19 hours ago

          It’s shared memory, so you would need to guarantee access to 16 GB on both ends.

          So… standard desktop CPUs can only talk to DDR.

          ‘CPUs’ can only utilize GDDR when they are actually part of an APU.

          Standard desktop GPUs can only talk to GDDR, which is part of their own whole separate board.

          GPU and CPU can talk to each other, via the mainboard.

          Standard desktop PC architecture does not have a way for the CPU to directly utilize the GDDR RAM on the standalone GPU.

          In many laptops and phones, a different architecture is used, built around LPDDR RAM: all of the LPDDR RAM is used by the APU, the APU being a CPU+GPU combo in a single chip.

          Some laptops use DDR RAM, but in those laptops the DDR RAM is only used by the CPU; those laptops have a separate GPU chip, which has its own built-in GDDR RAM. The CPU and GPU cannot and do not share these distinct kinds of RAM.

          (Laptop DDR RAM is also usually a different pin count and form factor than desktop PC DDR RAM; you usually can’t swap RAM sticks between them.)

          The PS5Pro appears to have yet another unique architecture:

          Functionally, the 2 GB of DDR RAM can only be accessed by the CPU part of the APU; it acts as a kind of reserve, a minimum baseline of CPU-only RAM set aside for certain CPU-specific tasks.

          The PS5Pro’s 16 GB of GDDR RAM is sharable and usable by both the CPU and GPU components of the APU.

          So… saying that you want a standard desktop PC build that shares all of its GDDR and DDR RAM… this is impossible, and nonsensical.

          Standard desktop PC motherboards, and compatible GPUs and CPUs, do not allow for shareable RAM, instead going with a design paradigm where the GPU has its own onboard GDDR RAM that only it can use, and the CPU has DDR RAM that only it can use.

          You would basically have to tear out a higher-end/more modern laptop board with an APU soldered onto it… and then install that into a ‘desktop PC’ case… to have a ‘desktop PC’ that shares memory between its CPU and GPU components, both of which would be encapsulated in a single APU chip.

          This concept, roughly, is what is generally called a MiniPC; it is a fairly niche thing, and not the kind of thing an average prosumer can assemble themselves like a normal desktop PC.

          All you can really do is swap out the RAM (if it isn’t soldered) and the SSD… and maybe, I guess, transplant it and the power supply into another case?

          I don’t know how you could arrive at such a conclusion, considering that the base PS5 has been measured to be comparable to the 6700.

          I can arrive at that conclusion because I compared actual benchmark scores from the nearest TFLOP-equivalent, more publicly documented, architecturally similar AMD APU: the 7600M. I specifically mentioned this in my post.

          The guy in the article here… well, he notes that the 6700 is a bit more powerful than the PS5 Pro’s GPU component.

          The 6600 is one step down in terms of mainline desktop PC hardware, and arguably the PS5 Pro’s performance is a bit better than a 6600 and a bit worse than a 6700; but at that level, all of the other differences in the PS5 Pro’s architecture leave basically a margin of error when trying to precisely dial in whether a 6700 or a 6600 is the closer match.

          You can’t do apples to apples spec sheet comparisons… because, as I have now exhaustively explained:

          Standard desktop PCs do not share RAM between the GPU and CPU. They also do not share memory interface busses and bandwidth lanes; in standard PCs, these are distinct and separate, because they use different architectures.

          I got my results by starting with the (correct*) TFLOPs output of a PS5 Pro, finding the nearest-equivalent APU with PassMark benchmark scores (reported by hundreds, or thousands, or tens of thousands of users), then comparing those PassMark APU scores to PassMark conventional GPU scores, and I ended up with ‘fairly close to an RX 6600’.

          * The early, erroneous reporting of the TFLOPs score as roughly 33, when it was actually closer to 16 or 17, stemmed from quoting the 16-bit (FP16) figure, when the more standard convention is to list the 32-bit (FP32) figure.
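
          That FP16-vs-FP32 mixup is easy to sanity-check with back-of-the-envelope math. A minimal sketch, using approximate public PS5 Pro figures (3840 shaders, ~2.18 GHz boost) as assumptions:

```python
# Rough TFLOPs arithmetic: shader cores x clock x ops per clock per core.
# 3840 shaders and ~2.18 GHz are approximate public PS5 Pro figures.
def tflops(shader_cores, clock_ghz, ops_per_clock):
    return shader_cores * clock_ghz * ops_per_clock / 1000.0

fp32 = tflops(3840, 2.18, 2)  # fused multiply-add counts as 2 ops/clock
fp16 = tflops(3840, 2.18, 4)  # dual-issue FP16 doubles the rate

print(round(fp32, 1), round(fp16, 1))  # ~16.7 vs ~33.5
```

          Quote the FP16 number and the chip looks twice as fast; same silicon, different test.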

          You, on the other hand, just linked to a Tom’s Hardware review of currently-in-production desktop PC GPUs… which did not make any mention of the PS5 Pro… and then you also acted as if a 6600 were half as powerful as a PS5 Pro’s GPU component… which is wildly off.

          A 6700 is nowhere near 2x as powerful as a 6600.

          2x as powerful as an AMD RX 6600 would be roughly an AMD RX 7900 XTX, the literal top-end card of AMD’s previous GPU generation… which is currently selling for something like $1250 +/- $200, depending on which retailer you look at, their current stock levels, and which partner manufacturer’s variant you’re going for.

          • Aceticon@lemmy.dbzer0.com · 13 hours ago

            Just to add to this: the reason you only see shared-memory setups on PCs with integrated graphics is that sharing lowers performance compared to dedicated memory. That is less of a problem if your GPU is only being used in 2D mode, such as for office work (mainly because that uses little memory), but more of a problem in 3D mode (as in most modern games), which is how the PS5 is meant to be used most of the time.

            So the PS5 having shared memory is not a good thing; it actually makes it inferior to a PC built with a GPU and CPU of similar processing power using the dominant gaming-PC architecture (separate memory).

            • addie@feddit.uk · 6 hours ago

              You’ve got that a bit backwards. Integrated memory on a desktop computer is more “partitioned” than shared - there’s a chunk for the CPU and a chunk for the GPU, and it’s usually quite slow memory by the standards of graphics cards. The integrated memory on a console is completely shared, and very fast. The GPU works at its full speed, and the CPU is able to do a couple of things that are impossible to do with good performance on a desktop computer:

              • load and manipulate models which are then directly accessible by the GPU. When loading models, there’s no need to read them from disk into the CPU memory and then copy them onto the GPU - they’re just loaded and accessible.
              • manipulate the frame buffer using the CPU. Often used for tone mapping and things like that, and a nightmare for emulator writers. Something like RPCS3 emulating Dark Souls has to turn this off; a real PS3 can just read and adjust the output using the CPU with no frame hit, but a desktop would need to copy the frame from the GPU to main memory, adjust it, and copy it back, which would kill performance.
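
              To make the second bullet concrete, here is a minimal sketch of a CPU-side tone-mapping pass: a generic Reinhard-style curve over a toy framebuffer, not the actual operator any particular game uses:

```python
# Generic Reinhard-style tone map: compresses HDR luminance into [0, 1).
# On a console with shared memory, the CPU can run a pass like this over
# the framebuffer in place; a desktop would first have to copy the frame
# out of VRAM, adjust it, and copy it back.
def reinhard(luminance):
    return luminance / (1.0 + luminance)

hdr_frame = [0.0, 0.25, 1.0, 4.0, 16.0]  # toy 1x5 "framebuffer"
ldr_frame = [reinhard(px) for px in hdr_frame]
print([round(px, 3) for px in ldr_frame])
```
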
              • sp3ctr4l@lemmy.dbzer0.com · 3 hours ago

                I… uh… what?

                Integrated memory, on a desktop PC?

                Genuinely: What are you talking about?

                Typical PCs (and still many laptops)… have a CPU that uses DDR RAM that is plugged into the mobo and can be removed. Even many laptops allow the DDR RAM to be removed and replaced, though working on a laptop can often be much, much more finicky.

                GPUs have their own GDDR RAM, either built into the whole AIB on a desktop, or inside of (or otherwise part of) a laptop GPU chip itself.

                These are totally different kinds of RAM, they are accessed via distinct busses, they are not shared, they are not partitioned, not on desktop PCs and most laptops.

                They are physically and by design distinct, set aside, and specialized to perform with their respective processors.

                The kind of RAM you are talking about, that is shared and partitioned, is LPDDR RAM… and it is incompatible with 99% of desktop PCs.

                Also… anything on a desktop PC that gets loaded and processed by the GPU does, at some point, have to go through the CPU and its DDR RAM first.

                The CPU governs the actual instructions to, and output from, the GPU.

                A GPU on its own cannot like, ask an SSD or HDD for a texture or 3d model or shader.

                Normally, compressed game assets are loaded from the SSD to RAM via the Win32 API. Once in RAM, the CPU then decompresses those assets. The decompressed game assets are then moved from RAM to the graphics card’s VRAM (ie, GDDR RAM), priming the assets for use in games proper.

                (addition to the quote is mine)

                Like… there is DirectStorage… but basically nothing actually uses it.

                https://www.pcworld.com/article/2609584/what-happened-to-directstorage-why-dont-more-pc-games-use-it.html

                Maybe it’ll take off someday, maybe not.
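
                For clarity, the conventional path that quote describes can be sketched end to end. This is a toy model — zlib standing in for whatever codec a real engine uses, and a plain list pretending to be VRAM — not any actual engine’s API:

```python
import zlib

# Sketch of the conventional desktop asset path: disk -> system RAM ->
# CPU decompress -> copy into GPU memory. zlib stands in for a real
# engine's codec; "vram" is just a list pretending to be GPU memory.
def load_asset_conventional(compressed_bytes, vram):
    staged = bytes(compressed_bytes)         # 1. read into system RAM
    decompressed = zlib.decompress(staged)   # 2. CPU decompresses
    vram.append(decompressed)                # 3. upload to GPU memory
    return decompressed

asset = b"texture data " * 100
on_disk = zlib.compress(asset)  # simulate the compressed file on disk
vram = []
restored = load_asset_conventional(on_disk, vram)
```

                DirectStorage’s pitch is skipping or offloading steps 1 and 2; the sketch above is the path nearly every shipped PC game still takes.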

                Nobody does dual GPU SLI anymore, but I also remember back when people thought multithreading and multicore CPUs would never take off, because coding for multiple threads is too haaaaarrrrd, lol.

                Anyway, the reason that emulators have problems doing the things you describe consoles as good at… is that consoles have fine-tuned drivers that work with only a specific set of hardware, while emulators have to reverse-engineer ways of doing the same that will work on all possible PC hardware configurations.

                People who make emulators generally do not have direct access to the actual proprietary driver code used by console hardware.

                If they did, they would much, much more easily be able to… emulate… similar calls and instruction sets on other PC hardware.

                But they usually just have to make this shit up on the fly, with no actual knowledge of how the actual console drivers do it.

                Reverse engineering is astonishingly more difficult when you don’t have the source code, the proverbial instruction manual.

                It’s not that desktop PC architecture… just literally cannot do it.

                If that were the case, all the same issues you bring up that are specific to emulators… would also be present with console games that have proper ports to PC.

                While occasionally, yes, this is the case for some specific games with poor-quality ports… generally, no, this is not true.

                Try running, say, an emulated Xbox version of Deus Ex: Invisible War, a game notoriously handicapped by its console-centric design. Compare the PC version of that game, on a PC… to that same game, emulating the Xbox version, on the same exact PC.

                You will almost certainly, for almost every console game with a PC port… find that the proper PC version runs better, often much, much better.

                The problem isn’t the PC’s hardware capabilities.

                The problem is that emulation is inefficient guesswork.

                Like, no shade at emulator developers whatsoever; it’s a miracle any of that shit works at all, reverse engineering is astonishingly difficult. But yeah, reverse-engineering driver or lower-level code, without any documentation or source code, is gonna be a bunch of bullshit hacks that happen to not make your PC instantly explode, lol.

              • Aceticon@lemmy.dbzer0.com · 3 hours ago

                When two processing devices try to access the same memory, there are contention problems, as the memory cannot be accessed by two devices at the same time (well, sort of: parallel reads are fine; it’s when one side is writing that there can be problems), so one of the devices has to wait. That makes it slower than dedicated memory, but the slowdown is not constant, since it depends on the memory access patterns of both devices.

                There are ways to improve this: for example, if you have multiple channels on the same memory module, then contention issues are reduced to accesses within the same memory block (which depends on the block size). Though this also means that parallel processing on the same device - i.e. multiple cores - cannot use the channels being used by a different device, so it’s slower.

                There are also additional problems with things like memory caches in the CPU and GPU: if an area of memory cached in one device is altered by a different device, that has to be detected and the cache entry removed or marked as dirty. Again, this reduces performance versus situations where there aren’t multiple processing devices sharing memory.

                In practice, the performance impact is highly dependent on whether and how the memory is partitioned between the devices, as well as on the amount of parallelism in both processing devices (this latter because of my point above: memory modules have a limited number of channels, so multiple parallel accesses to the same module from both devices can lead to stalls in the cores of one or both devices when not enough channels are available).
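
                A toy model of that channel-contention point (the channel counts and request rates here are invented purely for illustration):

```python
# Toy model of memory-channel contention: each cycle, every device issues
# some requests; requests beyond the available channel count stall.
# All numbers here are made up purely for illustration.
def stalled_requests(requests_per_cycle, channels, cycles):
    per_cycle_stalls = max(0, requests_per_cycle - channels)
    return per_cycle_stalls * cycles

CHANNELS = 2
dedicated = stalled_requests(2, CHANNELS, 1000)   # GPU alone on its module
shared = stalled_requests(2 + 2, CHANNELS, 1000)  # GPU + CPU on one module
print(dedicated, shared)  # sharing turns zero stalls into a steady tax
```
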

                As for the examples you gave, they’re not exactly great:

                • First, when loading models into GPU memory, even with SSDs the disk read is by far the slowest part and hence the bottleneck. So as long as things are done in parallel (i.e. whilst data is loaded from disk to CPU memory, already-loaded data is also being copied from CPU memory to GPU memory), you won’t see that much difference between loading to CPU memory and then to GPU memory versus loading directly to GPU memory. Further, the manipulation of models in shared memory by the CPU introduces the very performance problems I was explaining above, namely contention from both devices accessing the same memory blocks, and GPU cache entries getting invalidated because the CPU altered that data in main memory.
                • Second, if I’m not mistaken tone mapping is highly parallelizable (as pixels are independent - I think, but not sure since I haven’t actually implemented this kind of post processing), which means that the best by far device at parallel processing - the GPU - should be handling it in a shader, not the CPU. (Mind you, I might be wrong in this specific case if the algorithm is not highly parallelizable. My own experience with doing things via CPU or via shaders running in the GPU - be it image shaders or compute shaders - is that in highly parallelizable stuff, a shader in the GPU is way, way faster than an algorithm running in the CPU).

                I don’t think that direct access by the CPU to manipulate GPU data is at all a good thing (for the reasons given above). To get proper performance out of a shared-memory setup, at the very least the programming must be done in a special way that tries to reduce collisions in memory access, or the whole thing must be set up by the OS as it’s done on PCs with integrated graphics, where a part of the main memory is reserved for the GPU by the OS itself when it starts, and the CPU won’t touch that memory after that.

            • sp3ctr4l@lemmy.dbzer0.com · 10 hours ago

              Basically, this is true, yes, without going into an exhaustive level of detail on very, very specific subtypes and specs of different RAM and mobo layouts.

              Shared-memory setups generally are less powerful, but they also usually end up being cheaper overall, as well as having a lower power draw… and running cooler, temperature-wise.

              Which are all legitimate reasons those kinds of setups are used in smaller-form-factor ‘computing devices’, where heat management and airflow requirements… basically rule out using a traditional architecture.

              Though, recently, MiniPCs are starting to take off… and I am actually considering doing a build based on the Minisforum BD795i SE… which could be quite a powerful workstation/gaming rig.

              Aside about an interesting non-standard ‘desktop’ potential build

              This is a Mobo with a high end integrated AMD mobile CPU (7945hx)… that all together, costs about $430.

              And the CPU in this thing… has a PassMark score… of about the same as an AMD 9900X… which itself, the CPU alone, MSRPs for about $400.

              So that is kind of bonkers, get a high end Mobo and CPU… for the price of a high end CPU.

              Oh, I forgot to mention: This BD795iSE board?

              Yeah, it just has a standard PCIe x16 slot. So… you can plug any two-slot-width standard desktop GPU into it… and all of this either literally is, or basically is, the ITX form factor.

              So, you could make a whole build out of this that would be ITX form factor, and also absurdly powerful, or a budget version with a dinky GPU.

              I was talking in another thread a few days ago, and someone said PC architecture may be headed toward… basically, you have the entire PC, and the GPU, and that’s the new paradigm; instead of the old-school view of: you have a mobo, and you pick it based on its capability to support future CPUs in the same socket type, future RAM upgrades, etc…

              And this intrigued me; I looked into it, and yeah, this concept does have cost-per-performance merit at this point.

              So this uses a split between the GPU having its own GDDR RAM and the… CPU using DDR SODIMM (laptop form factor) RAM.

              But it’s also designed such that you can actually fit huge, standard-PC-style cooling fans… into quite a compact form factor.

              From what I can vaguely tell as a non-Chinese speaker… it seems like many more people over in China have been making high-end, custom desktop gaming rigs out of this laptop/mobile-style architecture for a decent while now, and only recently has the concept that you can actually build your own rig this way really entered the English-speaking world/market.

              • Lka1988@lemmy.dbzer0.com · 6 hours ago

                Fascinating discourse here. Love it.

                What about a Framework laptop motherboard in a mini PC case? Do they ship with AMD APUs equivalent to that?

                • sp3ctr4l@lemmy.dbzer0.com · 4 hours ago

                  Hrm, uh… Framework laptops… seem to be configurable with a mobile-grade CPU with integrated graphics… and also an optional, additional mobile-grade dedicated GPU.

                  So, not really an APU… unless you really want to haggle over definitions and say ‘technically, a CPU with pathetic integrated graphics still counts as a GPU and is thus an APU’.

                  Framework laptop boards don’t have the PCIe x16 slot for a traditional desktop GPU. As far as I am aware, Minisforum are the only people that do that, along with a high-powered mobile CPU.

                  Note that on the Minisforum mobo model I am talking about, the AMD chip is not really an APU either; it’s also a CPU with integrated graphics. Its iGPU is a Radeon 610M, basically the bare minimum to be able to render and output very basic 2D graphics.

                  True APUs are … things like what more modern consoles use, what a steam deck uses. They are still usually custom specs, proprietary to their vendor.

                  The Switch 2 will have a custom Nvidia APU, which is the first Nvidia APU of note to my knowledge, and it will be very interesting to learn more about it from teardowns and benchmarks.

                  Currently, the most powerful, non-custom, generally publicly available chip compatible with standard PC mobos… arguably an APU, arguably not… is the AMD 8700G.

                  It’s about $315, and is a pretty decent CPU; but as a GPU… it’s less powerful than a standard desktop RX 6500 from AMD, which is the absolute lowest-tier AMD GPU from now two generations back from current.

                  You… might be able to run games older than five-ish years at 1080p, medium graphics, 60 fps. I guess it would maybe be a decent option if you… wanted to build a console-emulator machine, roughly for consoles N64/PS1/Dreamcast and older, as well as being able to play older PC games, or newer PC games at lower settings/no more than 1080p.

                  I am totally just spitballing with that though, trying to figure out all that exactly would be quite complicated.

                  But now, back to Framework.

                  Framework is soon to be releasing the Framework Desktop.

                  This is a small form factor PC… which uses an actual proper APU, either the AMD AI Max 385 or 395.

                  It’s listed at an MSRP of $1100; they say it can run Cyberpunk at 1440p on high settings at about 75 fps… that’s with no ray tracing, no framegen… and, I think, also no frame upscaling being used.

                  So, presumably, if you turned on upscaling and framegen, you’d be able to get similar fps at ultra and psycho settings, and/or some amount of raytracing.

                  There are also other companies that offer this kind of true APU, MiniPC style architecture, such as EvoTek, though it seems like most of them are considerably more expensive.

                  https://wccftech.com/amd-ryzen-ai-max-395-strix-halo-mini-pc-tested-powerful-apu-up-to-140w-power-128-gb-variable-memory-igpu/

                  … And finally, looks like Minisforum is sticking with the laptop CPU + desktop GPU design, and is soon going to be offering even more powerful CPU+Mobo models.

                  https://wccftech.com/minisforum-ryzen-9-9955hx-x870m-motd-motherboard-9955hx-ms-a2-mini-pc-strix-nas/

                  So yeah, this is actually quite an interesting time of diversification away from … what have basically been standard desktop mobo architectures… for … 2, 3? decades…

                  …shame it all also coincides with Trump throwing a literally historically unprecedented, senile temper tantrum, fucking up prices and logistics for… basically the whole world, though of course much, much more seriously for the US.