Surface Go 2 Discussion Thread (May 2020)

Discussion in 'Microsoft' started by JoeS, May 6, 2020.

  1. violajack

    violajack Scribbler - Standard Member

    Messages:
    415
    Likes Received:
    461
    Trophy Points:
    76
    To be fair, the computer is older than he is. So early 2000's, 1960's, what's the difference to him?
     
    nnthemperor and JoeS like this.
  2. dstrauss

    dstrauss Comic Relief Senior Member

    Messages:
    10,882
    Likes Received:
    9,427
    Trophy Points:
    331
    I used to own a 1966 Mustang. My middle son (now 38) was riding in it with me to school (1st grade) and I stopped at a traffic light - he looked at me and asked "When you buy an old car does it always play old music?" and I finally realized he was referring to the original AM radio in the car which was tuned to our local oldies station...
     
  3. lafester

    lafester Scribbler - Standard Member

    Messages:
    140
    Likes Received:
    49
    Trophy Points:
    41
    Just got mine today and it is pretty nice. Easy to type on, seems solid, and I can finally use my Go on my lap. If only MS would make these and add in a second battery....

    I found the trackpad settings and am able to adjust the cursor speed, but I agree that the stock setting is about right. The mouse settings do nothing, as the reviewer discovered.


     
  4. Bergman

    Bergman Scribbler - Standard Member

    Messages:
    116
    Likes Received:
    143
    Trophy Points:
    56
    Totally with you here. What I think would be a serious win for MS is to lead with the low end now with the new laptop and turn the Go 3 into a more serious business machine. I am not looking for a low-cost device here, so up the RAM and processor, make the screen a tad larger still, give us a Brydge-style keyboard that attaches to the bottom port so it's not Bluetooth, and put a second battery in there. If we could get 6-ish hours out of the battery in the tablet but had a second battery in the keyboard to add another 2 hours when docked, I would think that would be a very solid business laptop.
     
  5. desertlap

    desertlap Scribbler - Standard Member Senior Member

    Messages:
    2,626
    Likes Received:
    3,295
    Trophy Points:
    181
    Yeah, not going to happen for multiple reasons.

    1. MS is, far more than most except perhaps Apple, careful about differentiating their product line, e.g. if a specific feature is crucial in your buying decision, then here is the model (or models) that features it. It's why the Surface Book clipboard doesn't have a kickstand and, conversely, the Pro doesn't have a full-on keyboard-type dock with additional functionality such as dedicated graphics in it.

    2. MS has repeatedly told us and our corporate customers that if use as a tablet is the most important function, the best experience is with the Go line, as the Pro is likely too big for many users.

    And there is anecdotal evidence on the Apple side, with keyboard take-up being nearly 90% with the 12.9 Pro, whereas it drops off significantly with the 11 Pro and even more so with the 10.2. That's not to say that they don't sell and promote the keyboard with the 11 and 10.2, just that the take-up rate is lower. We've heard around the 50% range for the 11 Pro and under 30% with the 10.2, and that's why they don't even bother for the mini.

    Now before you dismiss that, consider that Apple sells orders of magnitude more tablets than MS.

    3. You still have the issue of a very tightly packed housing with the Go, and thus the associated thermal constraints, as well as the larger overall motherboard/chipset size with Intel chips. Even something like the chipset in the Go laptop is probably both too big and too hot.

    4. So that leads to where they go with the Go. Since, again, MS sees the Go as their best tablet and thus maximum-mobility solution, they will likely do one of three things.

    a. They will just iterate with 11th- or 12th-gen Pentium and Core m chips next year, which is the safe/easy choice and will likely provide at least a mild performance bump. I'd rate that scenario as most likely.

    b. They will switch to the new low-power Ryzens in the pipeline. The Ryzens show real promise on a performance-per-watt basis and could lead either to better battery life, if they are calibrated towards that, or to some performance boost. The issue at this point is that, despite all of the many perks of Ryzen, they still run hotter than the corresponding Intel chipsets. I'd rate that the least likely scenario, though still possible.

    c. They switch the Go line to being WOA-based. This one is harder to quantify. As anyone who has used a Pro X can tell you, when using NATIVE apps, the Pro X is both very fast and gets great battery life, with excellent LTE (cellular) as a bonus, which would also benefit a future Go.

    Of course the current Achilles heel of the Pro X is legacy x86 support, which is still a HUGE issue for many companies and users, and one where the SQ2 in the latest Pro X seems to show only small gains. So the X factor here is Windows 10X (pun intended). Windows 10X has the potential to unshackle the WOA platform by allowing best performance via native apps and improved performance through containerization of an x86 emulation engine.

    One related note: the Go and the Surface Laptops have had some recent success in the consumer space, especially the low-end Go at the moment, due to both the low cost and pandemic work-from-home initiatives. But MS themselves see that as a blip right now and not reflective of the broader future.

    And the new Surface Laptop Go is an interesting experiment here, with even MS seeming a bit uncertain about who is likely to buy it, and in what quantities, albeit with the qualifier that they are making the biggest push in education as an alternative to Chromebooks and/or iPads with keyboards.

    So of course these are my opinions and observations, obviously influenced by my job and customer base, so keep that in mind. And I want to reiterate my appreciation of the Go line generally. It is both by far the best value in the Surface line and, I think, the best multi-utility device for a significant number of users. :)

    And yet another tome to start a new week....:D
     
    Last edited: Nov 1, 2020
    nnthemperor and dstrauss like this.
  6. Kumabjorn

    Kumabjorn ***** is back Senior Member

    Messages:
    4,391
    Likes Received:
    2,548
    Trophy Points:
    231
    Never really got into the nitty-gritty of CPUs; they were all Intel when I got my first PC and they loyally followed Moore's Law, so I constantly got faster and better PCs and had no reason to bury my nose in it. Nowadays we are obviously no longer into stationary computing, and needs change. So bear with me for a couple of probably stupid questions.

    What is it in x86 code that becomes complicated on an ARM chip?

    Is that the kind of problem that Apple Silicon is trying to solve? Integrating legacy code into modern CPUs?

    In that case, isn't that what MS is trying to do with SQ1 and SQ2 in the Pro X line?

    Now that we are down to 5 nm CPU production, shouldn't it be possible to include a unit on the die that is there specifically to handle the nasty x86 code that the ARM CPUs have trouble with?

    I guess my overall curiosity boils down to whether we aren't in an "in-between" state at the moment. Seems to me that MS is trying hard to move the whole organisation over to mobile solutions, but these days they are the IBM of the '80s.
     
    nnthemperor likes this.
  7. JoeS

    JoeS I'm all ears Senior Member

    Messages:
    5,565
    Likes Received:
    3,852
    Trophy Points:
    331
    Whew, now there's a big question! I'm expecting @desertlap to be writing a thesis here shortly.

    In the meantime, both types of chips handle digital info. Both chips shuffle ones and zeroes around to ultimately calculate things and display things. The two chip types have different inner workings / instruction sets. I believe the design of ARM chips and instruction handling is more geared toward efficiency. The instruction set is basically a language to talk to the chip, and these things are patented. If you want to run x86 software on an ARM chip, you're going to need a translator. Translation takes instructions, and that in itself costs energy in addition to running the code, so there's problem 1. Second is that the patents mean that building your own translator or pieces of ARM code that do Intel-like things will lead to lawsuits, so any company that tries to let x86 programs run on ARM chips has to tiptoe very carefully. As a result, the translation code (emulation) is not as efficient as it could be.
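    Just to make the "translation costs extra work" point a bit more concrete, here's a toy C sketch (nothing like how a real emulator is built -- the opcodes and names are entirely made up) of why pushing foreign instructions through a software dispatch loop burns cycles that native code never spends:

    ```c
    #include <stdio.h>

    /* Toy "guest" instruction set -- invented purely for illustration. */
    enum guest_op { GUEST_ADD, GUEST_SUB };

    struct guest_insn {
        enum guest_op op;
        int a, b;
    };

    /* Native path: the compiler turns this into a few machine instructions
     * that the CPU runs directly. */
    static int native_add(int a, int b) { return a + b; }

    /* Emulated path: every guest instruction pays for a fetch and a decode
     * (the switch) before the actual work -- dispatch bookkeeping the
     * native build simply never does. */
    static int emulate(const struct guest_insn *insn)
    {
        switch (insn->op) {
        case GUEST_ADD: return insn->a + insn->b;
        case GUEST_SUB: return insn->a - insn->b;
        }
        return 0;
    }

    int main(void)
    {
        struct guest_insn i = { GUEST_ADD, 2, 3 };
        printf("native: %d, emulated: %d\n", native_add(2, 3), emulate(&i));
        return 0;
    }
    ```

    Real emulation layers do much smarter things (caching translated code, for one), but the baseline overhead is the same idea.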

    Now, if Apple (unlike MS...) shows that they're all-in on ARM-based MacBooks, developers will build their code using compiler options that generate native ARM code, i.e. code that can be run efficiently on ARM, without risk of litigation (since it's made for ARM, not using anything Intel). In the case of MS, developers are holding off, because it takes effort to make sure that an ARM build of their code runs properly. Why do that if MS might throw in the towel, and when few people own Windows on ARM tablets/laptops?

    My guess is Apple has a better chance, because all the iOS stuff is already built for ARM. This means if they come out with a touch-enabled MacBook, it will run all the iOS stuff perfectly, PLUS it will do a lot of other macOS things very well (natively compiled). I'm guessing enough people will jump on the long battery life and the 'touch Mac' such that developers will have no choice but to compile for ARM. Chicken-and-egg problem solved. It only took a decade-long iOS detour. :)
     
    Last edited: Nov 1, 2020
  8. desertlap

    desertlap Scribbler - Standard Member Senior Member

    Messages:
    2,626
    Likes Received:
    3,295
    Trophy Points:
    181
    Thanks @JoeS for teeing this up for me :eek::D I'm not sure I'm up to the task, though. Trying to reduce what is a complex topic (see what I did there?) down to a few paragraphs is not an easy task.

    So, relative to what I think @Kumabjorn's question is, which is around emulation, I'll give it a shot....

    So first of all, Intel is classically a CISC (complex instruction set) architecture and is also little-endian. ARM, in this case Qualcomm, is RISC (reduced instruction set) and also big-endian.

    And the simplest part first. Big-endian versus little-endian is a difference in the order in which the bytes of multi-byte values are stored and operated on. One is not inherently better or worse overall than the other, though both have their strengths and weaknesses in specific types of tasks and calculations.

    To use an analogy, if you went to college with any engineering nerds, they likely used either HP calculators or Texas Instruments calculators and were strong advocates of one and despised the other. HP uses what's called RPN (reverse Polish notation) and TI uses classic algebraic notation. I won't go into the details of RPN, but again, for certain types of calculations it can be considerably faster than traditional algebraic notation.

    So in the context of the RISC versus CISC discussion, any emulation has the additional overhead of translation between little endian and big endian.
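    (A tiny C sketch to make the byte-order point concrete -- purely illustrative, and only relevant where the two sides genuinely disagree on endianness: it shows the kind of extra swap step a translator would have to insert around multi-byte values.)

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Reverse the byte order of a 32-bit value -- the sort of extra step a
     * translator has to add wherever guest and host lay out multi-byte
     * values in memory differently. */
    static uint32_t swap32(uint32_t v)
    {
        return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
               ((v << 8) & 0x00FF0000u) | (v << 24);
    }

    int main(void)
    {
        uint32_t x = 0x12345678u;
        const uint8_t *bytes = (const uint8_t *)&x;

        /* A little-endian machine stores this as 78 56 34 12 (least
         * significant byte first); a big-endian one stores 12 34 56 78. */
        printf("bytes in memory: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);
        printf("byte-swapped:    0x%08x\n", swap32(x));
        return 0;
    }
    ```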

    More generally, RISC is actually the older tech strictly speaking, with increased CISC being one of Intel's early innovations in the PC era. And it's that history that's most relevant here.

    In the early days of sub-1 GHz, single-core processors and single-threaded applications and operating systems (Windows), anything that could be offloaded to the processor, such as complex calculations like video codec decoding, would produce huge performance improvements compared to relying on the application and/or the OS. Intel's MMX extensions to the x86 architecture are absolutely amazing examples of engineering here.

    But OTOH it was almost something they were forced into by Apple with QuickTime, which ran incredibly effectively on the PowerPC chips, early examples of RISC in the PC market.

    Back to RISC: the modern implementation rests on the idea that it's more efficient to reduce the number of tasks that the processor actually performs, but perform them as fast as possible. However, it then becomes more reliant on the higher-level operating system and applications to parse those complex app-level tasks into instructions for the processor.

    So bear in mind that Windows is fundamentally built on the foundation of a CISC architecture, and thus emulation on RISC requires a very robust, and therefore complex, underlying software architecture to essentially act as translator.

    That all being said, both architectures have moved towards each other significantly, with, for example, Apple's A-series chips having significant CISC-like custom extensions and Intel's newer chips borrowing from RISC, especially with things like higher-order math calculations.

    Or to use another comparison, if the original 8086 chip came out today it would be grouped with the RISC crowd, and Apple's original A-series chips would have been termed CISC chips.

    So RISC, in my opinion, has some significant modern advantages compared to CISC. It can scale up to higher maximum speeds than CISC and, conversely, can be more power efficient simply because there is less "stuff" on the chip itself. That also lends itself to shrinking the overall die size, e.g. Apple just released 5 nm devices while Intel is stuck on 10 nm.

    And from an OS/app perspective, the fact that the instructions are reduced (pun intended again) makes it easier to, for example, distribute the workload to multiple cores, or, presuming a "smarter" OS, even distribute the various tasks to dissimilar cores that might be optimized for them.
    And this is something that IMHO Windows utterly fails at, as the initial Intel Lakefield results clearly showed, and that WOA generally suffers from to a lesser degree.

    OTOH, Apple does still focus on single-core performance with their A-series chips, because frankly humans generally are still learning how to multitask effectively in the context of a computing system.

    TL;DR: RISC has some inherent advantages currently, as CISC did in the recent past, and it's highly possible that it could flip again with new innovations in hardware.

    Probably waaaay more than you asked, but there you go :)

    PS: My daughter would have started her third semester of study on this topic this fall if COVID hadn't intervened, so that gives you an idea of how complex this topic actually is.
     
  9. Tams

    Tams Scribbler - Standard Member

    Messages:
    552
    Likes Received:
    283
    Trophy Points:
    76
    I stand to be corrected by @desertlap. Edit: Damn it! Bloody ninja'd while I was typing this up.

    Anyhow. It comes down to x86 and ARM processors using diametrically opposed ways of computing.

    Computers work off an oscillating electrical signal. One complete oscillation is one peak and one trough of a wave (returning to the middle). This is also called one clock cycle, and one cycle per second is one hertz (Hz).

    x86 is a CISC (complex instruction set computer). ARM (and others like MIPS and RISC-V) are RISCs (reduced instruction set computers). They do what their names say: CISC programs send more complex instructions to the processor, RISC programs send less complex (reduced) instructions to the processor.

    'Basic' instructions take one clock cycle to complete.

    With CISC, a complex operation is sent to the processor as a single instruction. It is then stored in transistors in the processor and divided into basic instructions that take one clock cycle each to complete.
    With RISC, a series of basic instructions is sent straight to the processor, each taking one clock cycle to complete.

    For low-level programmers, this means that RISC requires more code. However, RISC requires fewer transistors to store instructions, so more can be used for general purposes (basically making the processor faster).

    And I'll steal this from Stanford to sum up the different approaches: "The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program."
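    A rough way to picture that trade-off (the "assembly" in the comments is deliberately simplified and hypothetical -- real x86 and ARM sequences differ -- but it shows the shape of the two approaches):

    ```c
    #include <stdio.h>

    int main(void)
    {
        int b = 6, c = 7;
        int a;

        a = b * c;
        /* "CISC-style" view of that line: one fat instruction that can
         * reach into memory itself, taking several cycles:
         *     MUL  a, b, c          ; 1 instruction
         *
         * "RISC-style" view: only loads and stores touch memory, so the
         * same line becomes a short run of simple instructions, each
         * roughly one cycle:
         *     LOAD  r1, b
         *     LOAD  r2, c
         *     MUL   r3, r1, r2
         *     STORE a, r3           ; 4 instructions
         */

        printf("a = %d\n", a);
        return 0;
    }
    ```

    Same program either way: fewer-but-slower instructions on one side, more-but-faster on the other.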

    tl;dr RISC can usually do more instructions in a given amount of time, so is often faster (more efficient). However, it is more complex to code for.

    Now, at last, as to how this affects x86 emulation on ARM: x86's complex instructions need to be broken down into 'basic' instructions that ARM has before being sent to the processor, which increases the time it takes to do them. It's kind of like putting an x86 processor on top of the ARM one.

    The reverse is also not great, as 'basic' ARM instructions would need to be compiled into complex ones that x86 has before being sent to the processor.

    I'm not confident that many efficiencies will be found. If CISC stays around, then it'll be just pure brute-force computing power that makes emulation of it on RISC nice to use. If it doesn't, well, there will be a lot more RISC programs and quite a few low-level coders with jobs (provided we haven't AI'd them into unemployment).
     
    Kumabjorn, JoeS and desertlap like this.
  10. dstrauss

    dstrauss Comic Relief Senior Member

    Messages:
    10,882
    Likes Received:
    9,427
    Trophy Points:
    331
    I think that says it all...
     