M1 Apple Silicon

Discussion in 'Apple/iOS' started by Kumabjorn, Nov 10, 2020.

  1. Eltos

    Eltos Scribbler - Standard Member

    Messages:
    123
    Likes Received:
    156
    Trophy Points:
    56
    I mean, most people aren't buying the new MBPs for gaming, but that is pretty depressing given the cost of the things.
     
    sonichedgehog360 likes this.
  2. dellaster

    dellaster Creatively Talentless Senior Member

    Messages:
    2,446
    Likes Received:
    2,753
    Trophy Points:
    181
    Only RTX 3060 levels of performance (at half the power consumption) with integrated graphics? If I had written that sentence two years ago everyone would think I was either insane or making a bad joke. :rolleyes:
     
    Eltos, JoeS, bloodycape and 3 others like this.
  3. darkmagistric

    darkmagistric Pen Pro - Senior Member Senior Member

    Messages:
    3,480
    Likes Received:
    2,846
    Trophy Points:
    231
For how much power it sips, even RTX 3060 levels are quite impressive, especially since it's still running under emulation. I have little doubt that if Shadow of the Tomb Raider were recompiled and coded for the M1 Max, it would likely give the 3080 a run for its money. But now that the performance-tier M1s are out, as much as gaming will still be a pipe dream, the real test will be when M1 builds of Adobe Premiere, After Effects, Maya, and 3ds Max start to hit.

Also, I've been somewhat of a proponent of smaller form factor devices. While I don't have much need to purchase these new Pros, with my M1 Air still more than suitable, I was looking for a desktop replacement with more than what the M1 Mac Mini offers. Given the overall size of the M1 Max and M1 Pro, I could see those translating quite nicely into a Mac Mini form factor, which, if it could even hit 3060-level graphics, would be impressive for the size.
     
    Last edited: Oct 25, 2021
  4. sonichedgehog360

    sonichedgehog360 New forum: bit.ly/newTPCR Senior Member

    Messages:
    2,875
    Likes Received:
    3,000
    Trophy Points:
    181
I would have to strongly disagree here. As I stated, bear in mind that the CPU (and therefore the x86-to-ARM translation) is not the weakest link here. That is evidenced by the near-linear performance scaling from the M1 Pro to the M1 Max. If the CPU were the weakest link and we were CPU-bound, we would see little change in frame rates between the M1 Pro and M1 Max. That isn't the case here: as expected, performance roughly doubles going from the M1 Pro to the M1 Max, in line with the doubling of GPU silicon resources.
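    To make that concrete, here's a trivial back-of-envelope check (the frame rates below are hypothetical placeholders, not numbers from any review):

    ```swift
    // Hypothetical frame rates, purely for illustration -- not measured data.
    let m1ProFPS = 30.0   // assumed frame rate on M1 Pro
    let m1MaxFPS = 58.0   // assumed frame rate on M1 Max
    let gpuScale = 2.0    // M1 Max has roughly 2x the GPU resources of the M1 Pro

    // If doubling GPU resources roughly doubles the frame rate, the game is
    // GPU-bound, and the CPU (and thus the x86-to-ARM translation) isn't the limit.
    let scalingEfficiency = (m1MaxFPS / m1ProFPS) / gpuScale
    print("GPU scaling efficiency: \(Int(scalingEfficiency * 100))%")
    // ~100% => GPU-bound; well below 100% => CPU or translation is the bottleneck.
    ```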
     
    Last edited: Oct 25, 2021
  5. sonichedgehog360

    sonichedgehog360 New forum: bit.ly/newTPCR Senior Member

    Messages:
    2,875
    Likes Received:
    3,000
    Trophy Points:
    181
Well, console makers have used single-die SoCs, meaning a unified die for both GPU and CPU, for around a decade now, so it's not unprecedented. This is just the first instance of a computer manufacturer making a single-die SoC targeting this performance tier (high-end/midrange instead of low-end/entry-level).

Also bear in mind that much of Apple's lead is a product of them paying for the latest and greatest node. Ian Cutress of AnandTech was expecting RTX 3060 performance (not RTX 3080) some weeks ago. I have come to fancy Andrei Frumusanu less and less; he can be a bit of an Apple fanboy at times.

    [attached image: screenshot of Ian Cutress's tweet]

    https://twitter.com/IanCutress/status/1450182043367841797
     
    Last edited: Oct 25, 2021
    JoeS likes this.
  6. Azzart

    Azzart Late night illustrator Senior Member

    Messages:
    2,343
    Likes Received:
    1,729
    Trophy Points:
    181
    OK, but that is still running in a translated environment. Now imagine the computer launching a GPU-accelerated render in Blender using an M1-native version of the software: it's a beast in that form factor and with that little power consumption.
    Even if I have no use for it.
     
  7. dellaster

    dellaster Creatively Talentless Senior Member

    Messages:
    2,446
    Likes Received:
    2,753
    Trophy Points:
    181
At considerably higher power consumption, one must point out. (Which is why I don't have a console.) Regardless, as a power-constrained gamer (solar-only), the M1 Max makes me optimistic about my future gaming options. I congratulate Apple, a company I don't like and won't buy from anymore, because they deserve it. There's no call to downplay such achievements, in my opinion.

    Because Apple has done it, all the others will have to put in the effort to match them. They can't just phone it in as Intel did most of last decade. This is a good thing for everyone!
     
  8. desertlap

    desertlap Pen Pro - Senior Member Senior Member

    Messages:
    4,156
    Likes Received:
    6,018
    Trophy Points:
    231
Yes, but gaming is still a niche, and I'm waiting to see what the results are with native creative-pro apps that make fuller use of Metal.

Even with Intel, Apple was using Radeon chips that were tuned more for things like the Adobe creative apps, and of course their own pro apps like Logic and Final Cut, than for games.

We just started our testing, but one of our apps that is fully native and also uses Metal FREAKING FLIES on the 16: nearly 2.65x the Intel i9 MacBook Pro 16.

Of course that is only one app, and ours to boot, but that's without any optimization either.
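    For anyone curious what I mean by "uses Metal," here's a minimal, self-contained sketch (nothing to do with our actual app; the kernel, names, and sizes are made up) of dispatching a compute kernel and timing it with wall-clock time:

    ```swift
    import Foundation
    import Metal

    // Illustrative only: a tiny element-wise compute kernel, compiled from source
    // at runtime, dispatched once, and timed with wall-clock time.
    let source = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void scale(device float *data [[buffer(0)]],
                      uint id [[thread_position_in_grid]]) {
        data[id] *= 2.0f;
    }
    """

    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let library = try? device.makeLibrary(source: source, options: nil),
          let function = library.makeFunction(name: "scale"),
          let pipeline = try? device.makeComputePipelineState(function: function)
    else { fatalError("Metal setup failed") }

    // 16M floats of scratch data in shared memory (arbitrary size).
    let count = 16_000_000
    let input = [Float](repeating: 1.0, count: count)
    guard let buffer = device.makeBuffer(bytes: input,
                                         length: count * MemoryLayout<Float>.stride,
                                         options: .storageModeShared)
    else { fatalError("Buffer allocation failed") }

    let start = Date()
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(buffer, offset: 0, index: 0)
    encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: pipeline.threadExecutionWidth,
                                                           height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    print("GPU pass took \(Date().timeIntervalSince(start)) s")
    ```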

    Two other initial observations:
    First, both displays are superb, and the 16's is a significant step up from the Intel MacBook Pro 16's, which was already pretty great.

Second, the 14 has appeal to me (and I suspect to a lot of others too) as arguably the smallest "full-powered" laptop on the market. Many like me who would prefer a smaller system but often need the grunt of a bigger one might not have to make that trade-off anymore.

PS: The performance per watt of these systems is just nuts (and it's where Intel and even AMD have a lot of catching up to do, IMHO). When we ran the first pass of our battery-life torture test, we thought we'd screwed up the test.

PPS: Possibly because most of my folks are iPhone users (and I was too until fairly recently), the notch is a non-issue about 15 minutes after you start using these machines.
     
  9. sonichedgehog360

    sonichedgehog360 New forum: bit.ly/newTPCR Senior Member

    Messages:
    2,875
    Likes Received:
    3,000
    Trophy Points:
    181
Well, a large part of it is from Apple just paying for first dibs on the world's best node, putting them two nodes ahead of NVIDIA's Samsung 8nm process for its desktop Ampere products. While microarchitectural design is one major part of the formula, the heavier side of the coin is that you are always constrained by the process node. You wouldn't see them doing this well on a 10nm or 14nm process. Plus, the increasingly complex microarchitectural designs simply couldn't operate stably on a larger node. Signals and systems inform us that longer, noisier traces require increasing in-design latency targets to maintain an in-sync, stable signal. That is why, for example, you wouldn't see Alder Lake on a 45nm process. When Intel backported Ice Lake to 14nm with Rocket Lake, they got terrible latency numbers for precisely that reason. Process node delays are precisely why Intel is in the world of hurt it's in right now. You cannot have the microarchitecture without the node.
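    As a rough back-of-envelope for the trace-length point (my own illustration, using the standard distributed-RC approximation, nothing Intel- or Apple-specific): the delay of an on-chip wire of length $\ell$, with resistance $r$ and capacitance $c$ per unit length, is roughly

    $$ t_{\text{wire}} \approx \tfrac{1}{2}\, r\, c\, \ell^{2}, $$

    so it grows with the square of the trace length. Shrinking the node shortens the wires, which is what relaxes the latency targets needed to keep everything in sync.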
     
    Last edited: Oct 25, 2021
  10. darkmagistric

    darkmagistric Pen Pro - Senior Member Senior Member

    Messages:
    3,480
    Likes Received:
    2,846
    Trophy Points:
    231
Ever try running Daz 3D on an AMD graphics card?

It's my one misgiving about moving my desktop to the Vega-graphics Hades Canyon NUC. 3D doesn't always translate into simple performance-for-performance comparisons, and for programs built around NVIDIA's Iray, as Daz 3D is, some things just won't work on a different architecture. As it is, the M1 Max's graphics are essentially integrated, so the whole CPU/GPU combo is running under emulation, and to this point no game has been developed with this specific graphics architecture in mind. Even if it works and scales somewhat predictably with the number of graphics cores, there could be other reasons for that. Maybe the Rosetta 2 translation has a bottleneck in how much work it can feed to each graphics core? Until we can compare a native M1 Max game to a Rosetta-translated one, we really don't know how much of a performance hit the translation is imposing.
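    One thing that can at least be checked directly is whether a given process is running natively or under translation; macOS exposes a sysctl key for it. A minimal sketch (the key is Apple's documented one, the helper function name is my own):

    ```swift
    import Darwin

    // Returns true if the current process is translated by Rosetta 2,
    // false if it is running natively, nil if the answer can't be determined.
    func isTranslatedByRosetta() -> Bool? {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
        if result == -1 {
            // ENOENT: the key doesn't exist (e.g. older macOS), so not translated.
            return errno == ENOENT ? false : nil
        }
        return translated == 1
    }

    print(isTranslatedByRosetta() == true ? "Running under Rosetta 2"
                                          : "Running natively (or unknown)")
    ```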
     