FFXI Benchmark On M1 Mac

 Asura.Epigram
Offline
Server: Asura
Game: FFXI
user: jlisic
Posts: 115
By Asura.Epigram 2020-11-28 21:00:39
I've been pretty impressed by this little Mac mini I have, so I thought I'd see if I could at least run the benchmark. I used binaries from the open-source version of CodeWeavers' CrossOver (Wine) and gave it a go. Performance sorta sucks, but it's kind of amazing that it even works: this is 32-bit Windows code running through a 32-to-64-bit shim, with Rosetta 2 translating the x86-64 to arm64.

I'm sure there are a lot of performance tweaks that can be done to make this playable, so FFXI on the M1 is likely possible in the near future. It's unlikely that we will see Windower 4 support, but Ashita will probably work, as it does on Linux.

Here is a video!
YouTube Video Placeholder
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 02:19:30
The new M1 is an amazing piece of technology. It's going to pressure Intel and AMD to step it up.

Outdoing every single Intel chip at only 10W lmao. I'm waiting for the Hitler Intel video
 Shiva.Mlrlohki
Offline
Server: Shiva
Game: FFXI
user: Mlrlohki
Posts: 85
By Shiva.Mlrlohki 2020-11-29 02:32:13
I was kinda wondering about this kind of thing, whether the new Mac could run the FFXIV Mac client. Would be cool if I could afford one.
 Asura.Arico
Offline
Server: Asura
Game: FFXI
user: Tename
Posts: 535
By Asura.Arico 2020-11-29 03:41:50
Yeah. I've been really impressed with the Mac Mini. $200 extra for a 512GB SSD is obscene imo, even for Apple.
Offline
Posts: 363
By ksoze 2020-11-29 03:48:10
Was watching this and it seems intel is gonna be in trouble

YouTube Video Placeholder
 Bismarck.Siggymund
Premium
Offline
Server: Bismarck
Game: FFXI
user: Sigmund
Posts: 95
By Bismarck.Siggymund 2020-11-29 05:58:12
ksoze said: »
Was watching this and it seems intel is gonna be in trouble

Wow, that was interesting. Now we need it to run FFXI to keep our 30fps locked lol
Offline
Posts: 42642
By Jetackuu 2020-11-29 09:41:22
Asura.Arico said: »
Yeah. I've been really impressed with the Mac Mini. $200 extra for a 512gb ssd is obscene imo. Even for Apple.
Likely NVMe and not SATA, so not really. Despite being crapple garbage.

edit: somehow read 2TB SSD on this, never mind.
 Leviathan.Isiolia
Offline
Server: Leviathan
Game: FFXI
user: Isiolia
Posts: 458
By Leviathan.Isiolia 2020-11-29 09:47:51
Apple has only ever built in NVMe-style SSDs; part of the reason for their proprietary connector was adopting the interface before the standard was really settled on.

However, given what even a 1TB NVMe SSD costs these days, $200 is kind of expensive. That said, at least with storage you can just add a Thunderbolt 3 drive. Can't really add to the RAM though, which was the bigger thing that turned me off from ordering one of these on release.
Offline
Posts: 42642
By Jetackuu 2020-11-29 10:04:38
I mean, a 2TB NVMe is $200 (or more), so.
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 11:41:15
ksoze said: »
Was watching this and it seems intel is gonna be in trouble

YouTube Video Placeholder

Thanks for finding this video. The final quote really hit home.

I can't believe the performance tbh, for a first-generation product. And how well Rosetta 2 is working.
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 9701
By Asura.Saevel 2020-11-29 13:26:31
Asura.Icilies said: »
New M1 is an amazing piece of technology. Going to pressure Intel and AMD to step it up.

outdoing every single Intel chip at only 10W lmao. I'm waiting for the Hitler intel video

Not even close or even in the same ballpark. What they are doing is comparing a low-power mobile chip to other, older low-power mobile chips and just forgetting to mention that part. A 10W 5nm chip should beat a 12W 10-14nm chip (that's Intel right now). Intel's own 7nm technology isn't out yet, which is why AMD has been beating them so badly with Ryzen on TSMC's 7nm technology. Apple's design is just a normal big.LITTLE ARM SoC; the only thing "new" is that they spent the money to get it made on TSMC's latest technology node, which is 5nm. The smaller the node, the more transistors you can pack into the same space, while also having less leakage current, so less power / higher clock rates.
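The node-size point can be sanity-checked with idealized scaling arithmetic. This is a rough sketch only; marketing "nm" figures stopped tracking physical feature size years ago, so treat the ratios as illustrative, not as real density data:

```python
def density_ratio(old_node_nm: float, new_node_nm: float) -> float:
    """Idealized transistor-density gain when the linear feature size
    shrinks: area per transistor scales with the square of the node size."""
    return (old_node_nm / new_node_nm) ** 2

# Comparing the node sizes quoted above (illustrative only):
print(density_ratio(10, 5))            # 4.0 -> ~4x the transistors per area
print(round(density_ratio(14, 5), 2))  # 7.84
```

Even under this idealized model, a full node or two of lead buys Apple a multiple-x density (and leakage) advantage before any microarchitecture differences enter the picture.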

Quick class on uArch designs

That all being said, there is a reason we don't see many high-power ARM designs: the uarch is wide but shallow, with an emphasis on low power usage. This is kinda similar to how SPARC and MIPS work, though SPARC has its own quirks. Individual core designs are very simple, with not much silicon dedicated to prefetch, prediction, loop unrolling and caching. This lets us build a system that can process a ridiculous number of unrelated tasks simultaneously and / or use a very small amount of power. In contrast, uarchs like x86 and PPC/Power are narrow but deep, meaning individual processing cores are more complex, with extra space dedicated to in-situ instruction optimization, prediction and caching. This allows much faster single-thread / task performance at the expense of latency during multitudes of simultaneous tasks.

Basically all the extra die space on x86 and Power chips is from the massive amounts of cache they employ to fuel the complex prefetch/decode/prediction engines.

Zen 2 uArch
https://en.wikichip.org/wiki/amd/microarchitectures/zen_2#Architecture

Intel Ice Lake uArch

https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(client)

ARM Cortex-A77 uArch (M1 block diagrams aren't available yet)
https://en.wikichip.org/wiki/arm_holdings/microarchitectures/cortex-a77

Low-power ARM chips aren't anywhere near capable of competing with high-power or even mid-range desktop CPUs. However, they are very good for mobile and lower-power devices where batteries are the limit. Just to give an idea: the GeForce RTX 2070 is made on TSMC 12nm and runs at 175W. The new GeForce RTX 3070, made on Samsung's 8nm process, runs at 220W. Desktop CPUs then run in the 65 to 120W range depending on model. Combined, that is more than an order of magnitude above the thermal envelope of an M1.
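A quick back-of-envelope check of that last comparison, using only the wattages quoted above:

```python
# Wattages quoted above: GPU board power plus the desktop-CPU TDP range,
# compared against the M1's stated ~10W envelope.
rtx_3070_w = 220
desktop_cpu_w_range = (65, 120)
m1_w = 10

low = (rtx_3070_w + desktop_cpu_w_range[0]) / m1_w   # 28.5x
high = (rtx_3070_w + desktop_cpu_w_range[1]) / m1_w  # 34.0x
print(f"combined desktop envelope is {low:.1f}x-{high:.1f}x the M1's 10W")
```

Even the low end works out to ~28x the M1's 10W, which is where the "more than an order of magnitude" figure comes from.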
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 9701
By Asura.Saevel 2020-11-29 13:47:51
Good breakdown

https://www.extremetech.com/computing/317228-apples-new-m1-soc-looks-great-is-not-faster-than-98-percent-of-pc-laptops

The M1 is great at lower-power computing due to its amazing efficiency. Physics is a rather harsh mistress though, and scaling performance up gets much harder the bigger you go. The very things that give x86 high performance at the top end hurt it badly at the bottom end, which is why we don't see x86 powering phones. Conversely, the things that enable designs like ARM's to work so well at the bottom prevent them from scaling beyond that.

In other words, for gaming, x86 and similarly designed uarchs are always going to win against ARM / SPARC / MIPS-style uarchs.
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 9701
By Asura.Saevel 2020-11-29 14:39:03
And after some digging I can now see what's really happening in those benchmarks: Apple packaged the system memory on the CPU package instead of having it cross board-level interconnects.

https://mjtsai.com/blog/2020/11/23/m1-memory-and-performance/

Most modern desktop memory has 24~30GB/s of bandwidth, while the M1's on-package memory is around 60GB/s. This means any benchmark that relies heavily on main-memory access will end up running much better on the M1 than anything else, and likely better still if the algorithm is run with CUDA/OpenCL on a GPU.

This has been possible in the PC market for decades; it's not done due to the expense and the limitation it imposes. On-package memory means you are forever stuck with that memory limit and need to "upgrade" the entire package to get more. Not a problem for mobile devices, big issue for desktops.

Just for example, I ran Geekbench on this computer. I designed my own custom water loop with thermal sensors that spin the various fans up / down based on utilization. Geekbench never caused the system to spin up, and only once did it cause a single core to hit 50°C for a few seconds before everything dropped back down to ~30°C. What should have happened was the cooling system spinning up to deal with one or more cores running at 100% and temps hovering around 45~55°C on all cores. So take those benchmarks with a huge grain of salt; they are really just testing the memory interface, not the computational capacity.
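To make the "memory interface vs. computational capacity" distinction concrete, here is a minimal bandwidth-bound micro-benchmark sketch in pure Python (stdlib only; absolute numbers will land well below a tuned STREAM run, but the workload shape is the same: nearly all memory traffic, almost no arithmetic):

```python
import time

def copy_bandwidth_gbs(n_bytes: int = 64 * 1024 * 1024, repeats: int = 5) -> float:
    """STREAM-style copy test: time a large buffer copy and report GB/s.
    This is bound by the memory interface, not by computational capacity,
    so a chip with fast on-package memory scores well even with modest cores."""
    src = bytearray(n_bytes)
    dst = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst[:] = src  # one read + one write of n_bytes, done as a C-level copy
        best = min(best, time.perf_counter() - t0)
    return 2 * n_bytes / best / 1e9  # bytes read + bytes written, per second

print(f"~{copy_bandwidth_gbs():.1f} GB/s sustained copy bandwidth")
```

A run like this barely warms the cores, which matches the observation above: memory-heavy benchmarks can post big scores without ever stressing the cooling system.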
Offline
Posts: 8846
By SimonSes 2020-11-29 14:47:27
Asura.Saevel said: »
This has been possible in the PC market for decades, it's not done due to expense and the limitation it imposes. On-package memory means you are forever stuck with that memory limit and need to "upgrade" the entire package to get more. Not a problem for mobile devices, big issue for desktops.

No issue for Apple fans either. They have no problem upgrading to a whole new laptop XD
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 9701
By Asura.Saevel 2020-11-29 15:21:58
Yeah, HBM's been out for a while now; AMD uses it for their graphics cards.

https://en.wikipedia.org/wiki/High_Bandwidth_Memory

https://www.tomshardware.com/reviews/glossary-hbm-hbm2-high-bandwidth-memory-definition,5889.html

Apple used the same concept with custom LPDDR4X chips. I'm going to wait until GCC works with it; then guys like Agner will be able to start tearing things apart and see what's really happening under the hood. I'm always suspicious of benchmarks when new hardware is released; usually it's just PR meant to sell the product before reality sets in.
 Asura.Arico
Offline
Server: Asura
Game: FFXI
user: Tename
Posts: 535
By Asura.Arico 2020-11-29 16:14:24
Jetackuu said: »
Asura.Arico said: »
Yeah. I've been really impressed with the Mac Mini. $200 extra for a 512gb ssd is obscene imo. Even for Apple.
likely nvme and not sata, so not really. Despite being crapple garbage.

Right, but the difference between a high-end 256GB and a high-end 512GB NVMe is like $20, not $200.
 Asura.Saevel
Offline
Server: Asura
Game: FFXI
Posts: 9701
By Asura.Saevel 2020-11-29 16:30:27
250GB NVMe for $70 USD

https://www.amazon.com/Samsung-970-EVO-Plus-MZ-V7S1T0B/dp/B07MG119KG/ref=sr_1_3?dchild=1&keywords=Samsung%2B1tb%2Bnvme&qid=1606688672&sr=8-3&th=1

The 500GB version is $80 USD

The 1TB version is $150 USD

Extremely high end "Pro" version

https://www.amazon.com/Samsung-970-PRO-Internal-MZ-V7P1T0BW/dp/B07BYHGNB5/ref=sr_1_1?dchild=1&keywords=Samsung+980+Pro&qid=1606688869&sr=8-1

512GB for $170 USD, 1TB for $350 USD.

Yes, Apple's prices are outrageous; that is their business model. We can think of them as the Gucci / Chanel of electronic devices, as they use identical business models: sell a product as a designer brand, develop a loyal following for that brand, then milk that following for as much money as possible.
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 16:52:53
Asura.Saevel said: »
Asura.Icilies said: »
New M1 is an amazing piece of technology. Going to pressure Intel and AMD to step it up.

outdoing every single Intel chip at only 10W lmao. I'm waiting for the Hitler intel video

Not even close or even in the same ballpark. What they are doing is comparing a low power mobile chip to other older low power mobile chips and just forgetting to mention that part. A 10W 5nm chip should beat a 12W 10-14nm chip (that's Intel right now). Intel's own 7nm technology isn't out yet which is why AMD has been beating them so badly with Ryzen and TSMC's 7nm technology. Apple's design is just a normal big little ARM SoC, the only thing "new" is that they spent the money to get it made at TSMC's latest technology node which is 5nm. The lower the node size the more transistors you can pack into the same space while also having less leakage current so less power / higher clock rates.

[...]


Wrong.

Youtube. Hundreds of Videos at this point
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 16:57:15
Asura.Saevel said: »
Asura.Icilies said: »
New M1 is an amazing piece of technology. Going to pressure Intel and AMD to step it up.

outdoing every single Intel chip at only 10W lmao. I'm waiting for the Hitler intel video

Not even close or even in the same ballpark. What they are doing is comparing a low power mobile chip to other older low power mobile chips and just forgetting to mention that part. A 10W 5nm chip should beat a 12W 10-14nm chip (that's Intel right now). Intel's own 7nm technology isn't out yet which is why AMD has been beating them so badly with Ryzen and TSMC's 7nm technology. Apple's design is just a normal big little ARM SoC, the only thing "new" is that they spent the money to get it made at TSMC's latest technology node which is 5nm. The lower the node size the more transistors you can pack into the same space while also having less leakage current so less power / higher clock rates.

[...]

Regardless of how much tech jargon you pack into a post, the fact is this chip is outperforming nearly every single Intel chip in real-world applications. And it's already head to head with Ryzen, AND at less power consumption. Obviously gaming is not what a Mac is going for, but this shows how far behind Intel is in integrated performance.
 Shiva.Thorny
Offline
Server: Shiva
Game: FFXI
user: Rairin
Posts: 2115
By Shiva.Thorny 2020-11-29 16:59:51
Asura.Icilies said: »
Wrong.

Youtube. Hundreds of Videos at this point
This might be hard for a streamer to understand, but Youtube is actually not a source, and the person making a claim is responsible for providing evidence to back it.

M1 is absolutely not better than 'every single intel chip', it's better than some mobile chips in the 30-60 watt region and about even with intel's best mobile offerings. It shows potential, and could be great for mobile devices.

Until they can prove a way to scale it up to compete, it is garbage when compared to desktop chips. It is not going to revolutionize your mobile gaming device either, most of your power goes to GPU when gaming.

Asura.Icilies said: »
Regardless of how much tech jargon you pack into a post, the fact is this chip is outperforming nearly every single Intel chip in real-world applications. And it's already head to head with Ryzen, AND at less power consumption. Obviously gaming is not what a Mac is going for, but this shows how far behind Intel is in integrated performance.
It's only head to head with Ryzen if you look at single-core performance. Sorry to tell you, but similar per-core performance with 1/4 the cores is not better. It is not in the same league as Intel's desktop CPUs either.
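The per-core vs. aggregate point is just arithmetic; here is a sketch with hypothetical placeholder scores (not real benchmark numbers):

```python
def aggregate_score(per_core_score: float, cores: int) -> float:
    """Upper bound on multi-core throughput, assuming perfect scaling."""
    return per_core_score * cores

# Hypothetical, illustrative scores only:
m1_like = aggregate_score(100, 4)       # 4 fast cores, slightly better per core
desktop_like = aggregate_score(95, 16)  # 16 cores at ~95% of the per-core score
print(desktop_like / m1_like)  # 3.8 -> similar per core, ~4x the aggregate
```

Real scaling is worse than this perfect-scaling upper bound, but the shape of the argument holds: near-parity per core with a quarter of the cores still loses badly on total throughput.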
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 17:02:07
Shiva.Thorny said: »
Asura.Icilies said: »
Wrong.

Youtube. Hundreds of Videos at this point
This might be hard for a streamer to understand, but Youtube is actually not a source, and the person making a claim is responsible for providing evidence to back it.

M1 is absolutely not better than 'every single intel chip', it's better than some mobile chips in the 30-60 watt region and about even with intel's best mobile offerings. It shows potential, and could be great for mobile devices.

Until they can prove a way to scale it up to compete, it is garbage when compared to desktop chips. It is not going to revolutionize your mobile gaming device either, most of your power goes to GPU when gaming.

Asura.Icilies said: »
Regardless of how much tech jargon you pack into a post, the fact is this chip is outperforming nearly every single Intel chip in real-world applications. And it's already head to head with Ryzen, AND at less power consumption. Obviously gaming is not what a Mac is going for, but this shows how far behind Intel is in integrated performance.
It's only head to head with Ryzen if you look at single-core performance. Sorry to tell you, but similar per-core performance with 1/4 the cores is not better. It is not in the same league as Intel's desktop CPUs either.

Lol. You're all in denial, if visual evidence is "not real". You can literally find dozens of videos highlighting performance across chips in different W ranges, all the way up to 150W.

Are you stuck thinking gaming is the only application for processors?
 Shiva.Thorny
Offline
Server: Shiva
Game: FFXI
user: Rairin
Posts: 2115
By Shiva.Thorny 2020-11-29 17:05:13
https://www.cpu-monkey.com/en/cpu-apple_m1-1804

'You can find dozens of videos' means nothing. We have accepted tools for benchmarking processors, and the M1 falls near the top end of mobile CPUs in some tests and near the middle in others. There are no benchmarks where it has been shown to compete with desktop CPUs.
You are falling for marketing, or you're a rabid Apple fanboy.

For reference, the 4600U slightly beating it in the multi-core test is only a middle-of-the-road Ryzen mobile CPU. The top-end ones are 50% better, and desktop ones are 200% better.
 Asura.Arico
Offline
Server: Asura
Game: FFXI
user: Tename
Posts: 535
By Asura.Arico 2020-11-29 17:06:28
Asura.Icilies said: »

Wrong.

Youtube. Hundreds of Videos at this point

Hey guys, welcome to the ArmIsTheFuture YouTube channel! We're here today to compare the iPad Air, on the A14 Bionic, with an Intel 10900K on a super-niche 20-year-old benchmark. Who will win? WHAT?! THE A14 BEAT INTEL?!
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 17:08:24
Shiva.Thorny said: »
https://www.cpu-monkey.com/en/cpu-apple_m1-1804

'you can find dozens of videos' means nothing. We have accepted tools for benchmarking processors, and it falls near the top end of mobile CPUs in some tests and near the middle in others. You are falling for marketing, or a rabid apple fanboy.

Oh, "we", the elite lol.

Let's see your examples of real world application utilizing different chips.

I'm assuming you don't have anything but let's see.
 Shiva.Thorny
Offline
Server: Shiva
Game: FFXI
user: Rairin
Posts: 2115
By Shiva.Thorny 2020-11-29 17:10:49
"Real world application" is just a fancy way to say "anecdotal observation". You have provided literally nothing to support your case; I just gave you a link to the benchmark scores the chip has produced. Benchmarks are a practical way to assess performance because they remove any user bias by giving a finite score based on finite criteria.

You are claiming a 10W mobile chip is beating 150W desktop CPUs. The burden of proof is on you, as that's an absolutely ridiculous claim. If you have any 'real world examples' conducted to a scientific standard, I'd love to see them. I would absolutely love for CPUs to take a giant leap forward like this; I just don't see any evidence of it having happened.
 Asura.Arico
Offline
Server: Asura
Game: FFXI
user: Tename
Posts: 535
By Asura.Arico 2020-11-29 17:13:47
Asura.Icilies said: »
Oh, "we", the elite lol.

Let's see your examples of real world application utilizing different chips.

I think it's funny that both Apple and Intel use this argument for their chips being better.
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 17:24:43
Shiva.Thorny said: »
Real world application is just a fancy way to say 'anecdotal observation'. You have provided literally nothing to support your case, I just gave you a link to the benchmark scores the chip has produced. Benchmarks are a practical way to assess performance because they remove any user bias by giving a finite score based on finite criteria.

You are claiming a 10W mobile chip is beating 150W desktop cpus. The burden of proof is on you, as that's an absolutely ridiculous claim. If you have any 'real world examples' conducted to a scientific standard I'd love to see them. I would absolutely love for CPUs to take a giant leap forward like this. I just don't see any evidence for it having happened.

I mean, I don't need to prove anything.

I was able to witness a 10W chip processing Canon R5 RAW 4K 60 footage with perfect scrubbing, which an Intel chip cannot do, even with a $3,000 processor.

Let's not forget: fanless.
 Asura.Icilies
Guildwork Premium
Offline
Server: Asura
Game: FFXI
user: icilies
Posts: 227
By Asura.Icilies 2020-11-29 17:26:32
Like I said earlier: even if you are going to ignore Apple and give them no credit, this chip will benefit fans of all companies, because it's going to challenge them to innovate. Which, in the end, is what will help keep those desktop CPUs down in price.
Offline
Posts: 42642
By Jetackuu 2020-11-29 17:28:16
Asura.Icilies said: »
I mean. I don't need to prove anything.
You kind of do when you make the claim, that is kind of how it works.

Anecdotal nonsense is just that: nonsense.


Asura.Icilies said: »
Even if you are going to ignore and give no credit to Apple.

Give credit to them for what? Taking an existing technology, slapping their label on it and calling it innovation?

Yeah, no.
 Shiva.Thorny
Offline
Server: Shiva
Game: FFXI
user: Rairin
Posts: 2115
By Shiva.Thorny 2020-11-29 17:30:21
I didn't give Apple 'no credit'; I said that your claim is not in line with reality. It is a significant improvement for mobile devices, but it is not competitive with desktop CPUs. I have said nothing bad about Apple or the chip. I said it won't revolutionize your laptop, which is true: it is an improvement, but not a colossal one.

You dropped another anecdote, mentioned some other irrelevant things, and changed the topic. Have you been indoctrinated into a cult? Is Steve Jobs holding you hostage? A mobile CPU doesn't have to beat a top-tier desktop CPU to be good, but claiming it does just makes you sound ridiculous. Focus on the reality and it is still a pretty cool advancement.