Nvidia's Plan for ARM to Take x86's Throne | Exclusive

From what I understand of the M1, it does have some sort of translation layer, so not all x86 instructions are emulated on ARM; only a subset of them are. If Apple can do it, I would have to assume that AMD could as well; the question becomes whether they have the time to sink the resources into a serious ARM product right now. I would really hate to see AMD's hot streak come to an end because they stretched themselves too thin on too many projects at once.
The translation layer is from Rosetta 2, not the M1 CPU itself.
This is another reason why macOS has great performance, and Windows does not, when translating x86-64 code to ARM64.
 
The translation layer is from Rosetta 2, not the M1 CPU itself.
This is another reason why macOS has great performance, and Windows does not, when translating x86-64 code to ARM64.
So macOS will recompile for the processor in use? If it can?
 
The translation layer is from Rosetta 2, not the M1 CPU itself.
This is another reason why macOS has great performance, and Windows does not, when translating x86-64 code to ARM64.
Not entirely.
https://twitter.com/marcan42/status/1328940799082569729

"So Apple straight up implemented the x86 consistency model on their cores. That's the kind of high-impact detail that makes or breaks emulation performance for a different arch. Did they do this for any other x86-isms? Nobody knows so far."
 
Apple x86 to arm64:
https://www.infoq.com/news/2020/11/rosetta-2-translation/

I wonder if Apple broke any Intel patents, or AMD's, dealing with memory ordering. I guess we will find out rather quickly if that's the case. So far it looks very inventive, a new approach building on past endeavors. Also of note: Apple did this on the most advanced process available, 5nm, as a single chip rather than chiplets, with an 8-core CPU and GPU. What makes me think Apple is now ahead of everyone in designing chips? Results.
 
Apple x86 to arm64:
https://www.infoq.com/news/2020/11/rosetta-2-translation/

I wonder if Apple broke any Intel patents, or AMD's, dealing with memory ordering. I guess we will find out rather quickly if that's the case. So far it looks very inventive, a new approach building on past endeavors. Also of note: Apple did this on the most advanced process available, 5nm, as a single chip rather than chiplets, with an 8-core CPU and GPU. What makes me think Apple is now ahead of everyone in designing chips? Results.

It's a patent cold war. Did they? Probably. But how many patents does Apple own that they can countersue Intel/AMD with?

https://www.techspot.com/news/82903-apple-allies-intel-antitrust-case-against-softbank-owned.html

Patent law in the US is so outdated that basically any big company can sue any other big company over the most trivial things.
 
Apple x86 to arm64:
https://www.infoq.com/news/2020/11/rosetta-2-translation/

I wonder if Apple broke any Intel patents, or AMD's, dealing with memory ordering. I guess we will find out rather quickly if that's the case. So far it looks very inventive, a new approach building on past endeavors. Also of note: Apple did this on the most advanced process available, 5nm, as a single chip rather than chiplets, with an 8-core CPU and GPU. What makes me think Apple is now ahead of everyone in designing chips? Results.
Nah, no legal issues. I mean, a decade ago Transmeta translated x86 instructions in a software layer. Apple isn't doing anything different.
 
Nah, no legal issues. I mean, a decade ago Transmeta translated x86 instructions in a software layer. Apple isn't doing anything different.
Maybe? The legality of emulation has always been tricky, but nobody has built hardware into their silicon specifically to emulate better. They straight up implemented some of Intel's design in order to emulate x86 better. So at what point is the M1 an x86 CPU versus an ARM CPU? How many more x86 features can Apple incorporate into their silicon before it's considered x86? One could argue that without Rosetta 2 the x86-oriented hardware in the M1 is useless, and therefore it isn't an x86 CPU. One thing's for certain: Intel is preparing their lawyers to look into this.
 
Maybe? The legality of emulation has always been tricky, but nobody has built hardware into their silicon specifically to emulate better. They straight up implemented some of Intel's design in order to emulate x86 better. So at what point is the M1 an x86 CPU versus an ARM CPU? How many more x86 features can Apple incorporate into their silicon before it's considered x86? One could argue that without Rosetta 2 the x86-oriented hardware in the M1 is useless, and therefore it isn't an x86 CPU. One thing's for certain: Intel is preparing their lawyers to look into this.

Well, they didn't implement anything from x86 as I understand it. They simply changed how their chip deals with memory ordering... which varies from generation to generation, frankly. This isn't hardware emulation in any way. For x86 software that isn't optimized well for newer-generation Intel chips, it probably hurts more than it helps, frankly. Of course, addressing memory in the same order modern compiled x86 software expects is clearly logical. It is, however, NOT x86 emulation... not even close.

The thing about memory ordering is that it has to do with rules regarding reads, writes, termination, and speculation. There are instances where programs can write speculative information and cases where they cannot. Reads always have priority; reads can be reordered ahead of writes, but in general x86 chips don't support quick read-write-read-write interleavings; they read, then write, then reorder. According to AMD's white papers, for instance, a read cannot be reordered ahead of a prior write if the read is from the same location as the write; this causes a stall. There are all sorts of silly memory rules for x86 that software compilers account for. (In general, most programmers don't concern themselves with this stuff anymore... no one is writing things one bit at a time with PEEK and POKE, lol.)
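
To make that same-address rule concrete, here is the classic "store buffering" litmus test sketched in C++ (my illustration, not from AMD's paper): each thread stores to one variable and then loads the other. Because a load may pass an earlier store to a different address (the one reordering even x86's model permits), both threads can observe 0:

```cpp
// Store-buffering litmus test (illustrative sketch, not from the AMD doc).
// Both r1 == 0 and r2 == 0 is a legal outcome on x86 and ARM alike,
// because each load may be reordered ahead of the preceding store to a
// DIFFERENT address; a load from the SAME address as a prior store is
// never reordered ahead of it (it is satisfied by store-to-load forwarding).
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

int main() {
    std::thread t1([] {
        x.store(1, std::memory_order_relaxed);  // write x
        r1 = y.load(std::memory_order_relaxed); // read y: may pass the store
    });
    std::thread t2([] {
        y.store(1, std::memory_order_relaxed);  // write y
        r2 = x.load(std::memory_order_relaxed); // read x: may pass the store
    });
    t1.join();
    t2.join();
    std::printf("r1=%d r2=%d\n", r1, r2); // (0, 0) is possible without fences
    return 0;
}
```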

As far as patents go, keep in mind that most of this stuff for both AMD and Intel is based on university research... speculative execution methods tend to be dreamed up there and implemented by everyone at some point. As I understand what Apple has done (if true, because who knows), they arranged their CPU to handle software that is expecting x86-style out-of-order execution versus ARM out-of-order execution, which have enough differences to trip up software compiled to expect the former. The rules for writing memory blocks differ slightly between the two... ARM's implementation is more efficient, and frankly has fewer rules; however I think (I could be wrong) that when executing x86 code on an ARM chip, reads landing in the wrong order could cause stalls as the software waits for a block write, or for a write before a read reorder.

The CPUs also have different rules about their own speculative writes... one reason AMD's Ryzen performance caught up to Intel was that they changed their speculation engine to a setup basically identical to Intel's. This was, IMO, one of the major reasons for Intel's lead, as most software compilers were arranged to conform to the memory optimizations of Intel's speculation engine. AMD and Intel now basically use the same algorithm, borrowed from academia. IF (and again, I don't think anyone knows for sure) Apple simply added a mode that writes memory with the same rules modern chips (Intel for the last 10 years, AMD for the last 4) have been using to prevent read/write stalls, and orders Apple's own branch-prediction writes in the order the software expects, that seems logical... and there isn't anything you can patent there. I am pretty sure you can't patent general CPU memory ordering. Apple hasn't added x86 instructions, which Intel has also found they can't patent anyway... Cyrix... you can clean-room x86 instructions all you like. You can also build a chip that does things the same way as their published white papers... and memory ordering is hardly a trade secret; they publish the rules. :)

If Apple did add a mode that tweaks their ARM memory ordering to match what x86 software expects... frankly, what an elegant, ingenious solution. And not to bring Transmeta into this again... but that is basically exactly what they did. They added a software-level x86-to-RISC translation, but the chip itself arranged memory and followed all the same rules as Intel's chips. Frankly, if anyone's patents are being infringed... lol, I hope whoever holds the Transmeta patents isn't trolly. :)

This AMD paper includes their rules for memory writes:
https://www.amd.com/system/files/TechDocs/24593.pdf
 
AFAIK, the consensus was that Transmeta would not have been able to win the next round of legal fights, and they disappeared before that anyway.

But in other news,
https://www.crn.com/news/components...t-steve-fields-to-work-on-data-center-systems

Nvidia managed to attract an IBM Fellow out of retirement, (James) Steve Fields. He was one of the key POWER guys, by all appearances. Still working out of Austin, where the old IBM POWER uarch team is/was.
 
AFAIK, the consensus was that Transmeta would not have been able to win the next round of legal fights, and they disappeared before that anyway.

But in other news,
https://www.crn.com/news/components...t-steve-fields-to-work-on-data-center-systems

Nvidia managed to attract an IBM Fellow out of retirement, (James) Steve Fields. He was one of the key POWER guys, by all appearances. Still working out of Austin, where the old IBM POWER uarch team is/was.
I seem to remember Transmeta suing Intel over LongRun and... Intel settling by paying TM $150 million or so, plus yearly payments of $20 million or something like that for 5 or 6 years. Intel ignored power efficiency, and by the time they realized it might be at least somewhat important, they just cribbed Transmeta's design. Probably realizing they could force them out of business in other ways... and that they would hand the patents over in exchange for Intel not dragging them through court for a decade. I really don't think Intel could have won that fight... however, they could easily have punished them with years of high-end legal fees. Instead, Intel got LongRun essentially for free... from a company they could kill with simple sales-channel pressure, and better to get the patents they wanted through the front door than deal with whatever patent troll might end up with them later.

Anyway, ya, not really on topic. Except that in the case of Apple... I can't really see Intel even wanting to take Apple to court. Apple has sort of bet the Mac business on this; no doubt they would not settle unless Intel just handed them the keys... and they aren't going to be scared off by a decade-long legal battle. An actual ruling might set a precedent Intel really, really wouldn't like. There is good reason all their x86 patent suits end in settlements... Intel knows damn well that if a court ever heard one of those cases out, the chances of them losing are extremely high.
 
I seem to remember Transmeta suing Intel over LongRun and... Intel settling by paying TM $150 million or so, plus yearly payments of $20 million or something like that for 5 or 6 years. Intel ignored power efficiency, and by the time they realized it might be at least somewhat important, they just cribbed Transmeta's design. Probably realizing they could force them out of business in other ways... and that they would hand the patents over in exchange for Intel not dragging them through court for a decade. I really don't think Intel could have won that fight... however, they could easily have punished them with years of high-end legal fees. Instead, Intel got LongRun essentially for free... from a company they could kill with simple sales-channel pressure, and better to get the patents they wanted through the front door than deal with whatever patent troll might end up with them later.

Anyway, ya, not really on topic. Except that in the case of Apple... I can't really see Intel even wanting to take Apple to court. Apple has sort of bet the Mac business on this; no doubt they would not settle unless Intel just handed them the keys... and they aren't going to be scared off by a decade-long legal battle. An actual ruling might set a precedent Intel really, really wouldn't like. There is good reason all their x86 patent suits end in settlements... Intel knows damn well that if a court ever heard one of those cases out, the chances of them losing are extremely high.
Eh. Intel might as well keep their legal team busy if they can't advance elsewhere. If Apple crosses a line, I'm sure a legal battle will take place, and it will have minimal impact on either of the behemoths.
 
Not entirely.
https://twitter.com/marcan42/status/1328940799082569729

"So Apple straight up implemented the x86 consistency model on their cores. That's the kind of high-impact detail that makes or breaks emulation performance for a different arch. Did they do this for any other x86-isms? Nobody knows so far."

ARM (like most architectures: MIPS, POWER, ...) implements a weak memory consistency model. "Weak" means that the CPU can reorder loads and stores for greater efficiency and performance. The x86 architecture implements a strong memory consistency model that prohibits most reordering. If you are translating x86 code to ARM and the resulting code is reordered freely, it can cause memory faults, because the x86 compiler assumed a strong memory model and emitted no consistency checks. The solution would be to check, during translation, whether each reordering of memory operations would cause a fault or is safe. It is difficult to confirm that a reordering will not be a problem, so the translation layer typically has to insert a lot of memory barriers into the code. The time spent checking the code and/or the introduced memory barriers increase the overhead of translation and reduce performance. So what Apple has done is implement a special strong memory mode in hardware, which is activated only when running in x86 emulation mode. When running under Rosetta 2, reordering is disabled and memory operations are executed in the order the x86 compiler issued them.
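
To make the trade-off concrete, here is a minimal sketch (my own illustration, not Rosetta 2's actual code generation) of what a translator must emit with and without such a hardware mode. x86's model behaves roughly as if every load were an acquire and every store a release, so on an ordinary weakly-ordered ARM core the translator has to request that ordering explicitly (compiled to ldar/stlr or dmb barriers), while a hardware TSO mode lets plain accesses through:

```cpp
// Hypothetical sketch of a binary translator's choices for guest memory
// accesses; the names here are mine, not Rosetta 2's.
#include <atomic>

std::atomic<int> guest_word{0}; // stands in for one guest memory location

// Without a TSO hardware mode: every guest load/store carries explicit
// ordering, which AArch64 compiles to ldar/stlr (or plain accesses + dmb).
int  guest_load_weak()       { return guest_word.load(std::memory_order_acquire); }
void guest_store_weak(int v) { guest_word.store(v, std::memory_order_release); }

// With a TSO mode enabled for the thread, the hardware itself preserves
// x86 ordering, so the translator can emit ordinary, faster accesses.
int  guest_load_tso()        { return guest_word.load(std::memory_order_relaxed); }
void guest_store_tso(int v)  { guest_word.store(v, std::memory_order_relaxed); }
```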

So Apple is just telling its cores something like "stop optimizing so aggressively when executing translated code". When running native code, the Apple cores reorder memory operations just like the rest of the ARM cores.

I wonder if Apple broke any Intel patents? or AMDs? dealing with memory ordering. I guess we will find out rather quickly if the case. So far it looks very inventive and a new approach building from the past endeavors. Also something of note, Apple did this on the most advance process available, 5nm, single chip vice chiplets, 8 core CPU and GPU. What makes me think Apple is now ahead of everyone in designing chips? Results.

Enforcing that the cores don't reorder memory operations isn't breaking any patent, because you are just forcing the cores to stop optimizing loads and stores and to execute the code as the x86 compiler generated it. IBM has been doing this for about a decade. Since POWER7, IBM has implemented an equivalent called "Strong Access Ordering" (SAO) mode. Implementing a stronger memory model allows emulators to translate x86 code into POWER code more efficiently, resulting in faster execution than if the emulator had to insert memory barriers into the code to avoid faults. Of course, running translated code in SAO mode is still slower than running native POWER code.

"The IBM POWER7 processor allows pages to be marked as requiring strong memory ordering. Hardware in the form of a controller unit (described in more detail hereinafter) then ensures that any accesses by any thread to these pages occur in a strongly ordered fashion, while accesses to other pages proceed as normal. There is a performance penalty for requiring this ordering though, and in a multi-threaded environment it has hitherto been assumed that all pages may be accessed by all threads, and as such all pages are marked for SAO and incur this cost."
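
For what it's worth, Linux on powerpc has historically exposed SAO to user space as an mmap/mprotect protection flag. A hedged sketch (the flag is powerpc-only and kernel support has varied across versions, so treat this as illustrative rather than portable):

```cpp
// Hedged sketch: map guest memory with strong access ordering on a
// powerpc Linux system (POWER7 and later). PROT_SAO does not exist on
// other architectures, and support has come and gone across kernels.
#include <sys/mman.h>
#include <cstdio>

#ifndef PROT_SAO
#define PROT_SAO 0x10 // powerpc uapi value; undefined elsewhere
#endif

int main() {
    void* guest = mmap(nullptr, 1 << 20, PROT_READ | PROT_WRITE | PROT_SAO,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest == MAP_FAILED) {
        std::perror("mmap(PROT_SAO)"); // expected on non-powerpc kernels
        return 1;
    }
    // An emulator would place guest data here; every access by any thread
    // to these pages is then strongly ordered by the hardware.
    munmap(guest, 1 << 20);
    return 0;
}
```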
 
Maybe? The legality of emulation has always been tricky, but nobody has implemented hardware into their hardware in order to emulate better. They straight up implemented some of Intel's design in order to emulate x86 better. So at what point is the M1 a x86 CPU and a ARM CPU? How many more x86 features can Apple incorporate into their silicon before it's considered x86? One could argue that without Rosetta 2 the x86 hardware in the M1 is useless and therefore not considered an x86 CPU. One things for certain is that Intel is preparing their lawyers to look into this.

No one is implementing x86 hardware. Moreover, the M1 is not a CPU. The M1 is a SoC composed of CPU + GPU + Mem + Neural Engine + DSP + I/O + ···
 
Strong memory ordering died with the 486. Everything past that point must use specific instructions to control ordering (locks, fences/barriers, flushes).

From Intel's own manual (here), sec. 8.2.1:

Memory Ordering in the Intel® Pentium® and Intel486™ Processors

The Pentium and Intel486 processors follow the processor-ordered memory model; however, they operate as strongly-ordered processors under most circumstances. Reads and writes always appear in programmed order at the system bus—except for the following situation where processor ordering is exhibited. Read misses are permitted to go ahead of buffered writes on the system bus when all the buffered writes are cache hits and, therefore, are not directed to the same address being accessed by the read miss.

In the case of I/O operations, both reads and writes always appear in programmed order.

Software intended to operate correctly in processor-ordered processors (such as the Pentium 4, Intel Xeon, and P6 family processors) should not depend on the relatively strong ordering of the Pentium or Intel486 processors. Instead, it should ensure that accesses to shared variables that are intended to control concurrent execution among processors are explicitly required to obey program ordering through the use of appropriate locking or serializing operations (see Section 8.2.5, "Strengthening or Weakening the Memory-Ordering Model").

Moreover, there are a lot more specifics to how a modern x86 can control memory and instruction order.

The nuances come into play in where the barriers happen. On RISC you have a load/operate/store model where a barrier can be placed at specific points, while on CISC the barrier covers the full ordering of the resulting micro-ops.

What I think is happening: if x86-level ordering control is implemented to the exact specifications on an ARM core, the resulting code could end up adding more flushes than necessary. Performance would most certainly be less than stellar. They either implemented streaming stores or the page-access model mentioned above. The Rosetta performance being so good points to the former; the fact that the M1 runs emulated code in its own specific space points to the latter.
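
As a generic illustration of those explicit controls (my sketch, not tied to the M1), the same portable C++ fence lowers to very different instructions on each architecture:

```cpp
// Producer/consumer handoff using explicit fences (illustrative sketch).
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> flag{0};
int payload = 0;

int main() {
    std::thread producer([] {
        payload = 42;
        // Full barrier: typically `mfence` (or a locked RMW) on x86-64,
        // `dmb ish` on AArch64, `sync` on POWER.
        std::atomic_thread_fence(std::memory_order_seq_cst);
        flag.store(1, std::memory_order_relaxed);
    });
    std::thread consumer([] {
        while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
        std::atomic_thread_fence(std::memory_order_seq_cst);
        std::printf("payload=%d\n", payload); // guaranteed to print 42
    });
    producer.join();
    consumer.join();
    return 0;
}
```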
 
I think people underestimate the barriers to entry here. Sure, there are both Windows and Linux on ARM, but all the software...

Apple can pull architecture swaps off because they have a fanatical user base that demands the software vendors follow suit and redevelop for the new arch.

WinTel platforms are pretty entrenched, and I don't think an Nvidia ARM solution will have the same pull behind it to make it take over. Who knows though.
 
I think people underestimate the barriers to entry here. Sure, there are both Windows and Linux on ARM, but all the software...

WinTel platforms are pretty entrenched, and I don't think an Nvidia ARM solution will have the same pull behind it to make it take over. Who knows though.
Would the natural entry for an Nvidia ARM solution be a gaming console?

Legacy issues would be quite different, marketing would be easy, and keeping the complete system under 200 watts would make it easier and more natural to compete with the x86 alternative, and so on.

Some issues could still exist (how easily engines and other libs could be transferred, etc.)... but consoles have a history of going to PowerPC, Cell, etc., where supporting previous releases is still perceived as a plus and not a requirement (and they could probably run some titles emulated/recompiled).
 
Would the natural entry for an Nvidia ARM solution be a gaming console?

Legacy issues would be quite different, marketing would be easy, and keeping the complete system under 200 watts would make it easier and more natural to compete with the x86 alternative, and so on.

Some issues could still exist (how easily engines and other libs could be transferred, etc.)... but consoles have a history of going to PowerPC and Cell, where supporting previous releases is still perceived as a plus and not a requirement (and they could probably run some titles emulated/recompiled).

This would be a very easy and likely use case. If NVIDIA gets ARM and really wants to grow their own usage, they can go after the SoC market for consoles very easily. They could now, true, but once they own ARM they have an even greater reason to do so beyond just being a licensee looking to make a few bucks.
 
It's odd that they haven't. The Switch uses basically a Tegra X1, and the chips since then (Parker with Pascal GPU, Xavier with Volta GPU, and now Orin with Ampere GPU) have all been significant upticks in graphics and CPU capability (finding actual benchmarks on the GPUs has been difficult, but I'm making this assumption based on what we know of the desktop versions of these GPUs).

But it seems that the biggest use for NVIDIA SoCs, outside of NVIDIA development boards, is... car multimedia systems. Not sure why they haven't tried to leverage that GPU tech to get into more gaming consoles, or at least updated the Shield TV box.
 
Not sure why they haven't tried to leverage that GPU tech to get into more gaming consoles, or at least updated the Shield TV box.
Because developing consoles would require a company to work WITH Sony and Microsoft, freely sharing knowledge and technology so all parties can reap the benefits. A kick-ass APU helps too.
 
It's odd that they haven't. The Switch uses basically a Tegra X1, and the chips since then (Parker with Pascal GPU, Xavier with Volta GPU, and now Orin with Ampere GPU) have all been significant upticks in graphics and CPU capability (finding actual benchmarks on the GPUs has been difficult, but I'm making this assumption based on what we know of the desktop versions of these GPUs).

But it seems that the biggest use for NVIDIA SoCs, outside of NVIDIA development boards, is... car multimedia systems. Not sure why they haven't tried to leverage that GPU tech to get into more gaming consoles, or at least updated the Shield TV box.
Not enough of a market yet. Microsoft has been disinclined to work with them for some reason, so the Surface is out so far. Phones: most of the vendors have a preferred ARM provider or make their own. The car guys need more and more horsepower though, as those systems are doing a lot more than multimedia now. Tablets on the Android side didn't catch on... and Roku or the cheap guys own the set-top box market.
 
Because developing consoles would require a company to work WITH Sony and Microsoft, freely sharing knowledge and technology so all parties can reap the benefits. A kick-ass APU helps too.
Is that a big difference from working with Nintendo? I guess so; with Nintendo you get to do it with last-gen hardware (instead of the new stuff)?
 
Yeah, it's just odd to me. They worked with Sony on the PS3, Microsoft on the Xbox, Nintendo on the Switch, and they've got their own Shield console.

Their recent chips would be terrible for mobile devices (power consumption too high), but Xavier would be a huge upgrade over the current Shield console. Orin would be even more so, with its big 65W TDP, but it was only announced a couple of years ago, so I'm guessing it's still a year or so out from actual availability.

Maybe we'll see a newer Nvidia SoC in the rumored new Switch system.
 
Is that a big difference from working with Nintendo? I guess so; with Nintendo you get to do it with last-gen hardware?
Nintendo is like '80s arcade games. Potato platform. Not exactly cutting-edge development. I hardly believe that Nvidia's relationship with Nintendo is as deep as AMD's is with MS and Sony. Just look at how the cooperative technology has rolled out: close DX12 development, custom APUs for both MS and Sony from AMD, VRS, VRR; the list goes on. I don't really think that the Nintendo<->Nvidia side brings much to the table comparatively.
 
It's odd that they haven't. The Switch uses basically a Tegra X1, and the chips since then (Parker with Pascal GPU, Xavier with Volta GPU, and now Orin with Ampere GPU) have all been significant upticks in graphics and CPU capability (finding actual benchmarks on the GPUs has been difficult, but I'm making this assumption based on what we know of the desktop versions of these GPUs).

But it seems that the biggest use for NVIDIA SoCs, outside of NVIDIA development boards, is... car multimedia systems. Not sure why they haven't tried to leverage that GPU tech to get into more gaming consoles, or at least updated the Shield TV box.
Using x86 makes it far easier to port to PC afterward, and they get to use the huge libraries of development tools available. To switch to ARM, console developers would need an all-new toolset, as well as having to learn the ins and outs of a completely different architecture. The Tegra X1 had been out for a while and was very well documented when it came to the Switch, and it wasn't too dissimilar to the chips used in the DS. Nvidia would have to make one hell of a presentation to score that contract; not impossible, but certainly a hard sell.
 
Using x86 makes it far easier to port to PC afterward, and they get to use the huge libraries of development tools available. To switch to ARM, console developers would need an all-new toolset, as well as having to learn the ins and outs of a completely different architecture. The Tegra X1 had been out for a while and was very well documented when it came to the Switch, and it wasn't too dissimilar to the chips used in the DS. Nvidia would have to make one hell of a presentation to score that contract; not impossible, but certainly a hard sell.
https://www.techradar.com/news/nvidias-dollar40-billion-arm-acquisition-has-hit-a-speedbump
 
Not unexpected at all; I'm sure Nvidia has been preparing for this. But if this deal fails, ARM's in trouble: SoftBank isn't going to keep throwing money at something they lose money on. Eventually they will find a buyer, and it's likely going to be a hedge-fund patent troll based out of Brazil.
SoftBank already lost a lot on its WeWork bet:


"SoftBank values WeWork at $2.9 billion, down from $47 billion a year ago"

--

"WeWork and SoftBank Group Corp. failed to persuade a U.S. judge to throw out a lawsuit filed by some of the startup’s directors over a canceled $3 billion stock purchase." -- https://www.bloomberg.com/news/arti...nk-lose-bid-to-have-suit-over-deal-thrown-out

--

"SoftBank racks up $3.7bn in losses at tech stock trading unit" -- https://www.ft.com/content/0edb7c17-58e6-4ded-acfa-1822440a926c
 
SoftBank already lost a lot on its WeWork bet:


"SoftBank values WeWork at $2.9 billion, down from $47 billion a year ago"

--

"WeWork and SoftBank Group Corp. failed to persuade a U.S. judge to throw out a lawsuit filed by some of the startup’s directors over a canceled $3 billion stock purchase." -- https://www.bloomberg.com/news/arti...nk-lose-bid-to-have-suit-over-deal-thrown-out

--

"SoftBank racks up $3.7bn in losses at tech stock trading unit" -- https://www.ft.com/content/0edb7c17-58e6-4ded-acfa-1822440a926c
They have been losing money on all their tech ventures.
 
They have been losing money on all their tech ventures

seems fishy, how are they able to sustain such deep losses nearing hundreds of billions when all added up?

something doesn't add up that they can take such blows
 
Yeah, WeWork was probably a decent bet until everyone started working from home.

We had two WeWork offices for locations where we had a lot of employees, but closed both of them late summer, I think. No one was going. I imagine a lot of companies did the same.
 
seems fishy, how are they able to sustain such deep losses nearing hundreds of billions when all added up?

something doesn't add up that they can take such blows
Because they make trillions, and those losses are easy tax write-offs. But even they can't keep losing money hand over fist; it's not good for business.
 
Because they make trillions, and those losses are easy tax write-offs. But even they can't keep losing money hand over fist; it's not good for business.
proof?

unaware of any company making trillions....

Apple only has a market cap of 2 trillion, Amazon is 1.6 trillion, and that's not revenue as far as I know, just company valuation.

SoftBank isn't even close; they've been losing money:


"SoftBank posted an operating loss of 1.36 trillion yen, or $12.7 billion, in the fiscal year that ended March 31, its first annual loss in 15 years. It reported a profit of $19.6 billion the previous year. Its net income loss was $894 million.May 17, 2020"

SoftBank's Market Cap is only 159 Billion, so not even close to Apple or Amazon
 
proof?

unaware of any company making trillions....

Apple only has a market cap of 2 trillion, Amazon is 1.6 trillion, and that's not revenue as far as I know, just company valuation.

SoftBank isn't even close; they've been losing money:


"SoftBank posted an operating loss of 1.36 trillion yen, or $12.7 billion, in the fiscal year that ended March 31, its first annual loss in 15 years. It reported a profit of $19.6 billion the previous year. Its net income loss was $894 million.May 17, 2020"

SoftBank's Market Cap is only 159 Billion, so not even close to Apple or Amazon
It's better to say they move trillions: they put the various tech investments into a managed portfolio along with a bunch of higher-performing stocks. They balance each other out; investors make money, the business losses are covered, and they write off the losses on their taxes. It's a pretty neat little setup the banks have. But yes, their losses are piling up; they have been making poor and desperate tech investments trying to make up for their other failed tech investments.
 
It's better to say they move trillions: they put the various tech investments into a managed portfolio along with a bunch of higher-performing stocks. They balance each other out; investors make money, the business losses are covered, and they write off the losses on their taxes. It's a pretty neat little setup the banks have. But yes, their losses are piling up; they have been making poor and desperate tech investments trying to make up for their other failed tech investments.
Are they offloading the losses to bad shell companies they created to whitewash SoftBank's balance sheets every quarter, or what?
 
Are they offloading the losses to bad shell companies they created to whitewash SoftBank's balance sheets every quarter, or what?
Basically, yes. Lots of banks do it. But if they do it too long, regulators and investors give them the stink eye, and SoftBank is getting attention, so they want to offload the assets, write off the losses for good, and move on. Pair the write-down with a few high-profile "retirements" and overall their stock price will probably go up for it.
 
Basically, yes. Lots of banks do it.
Isn't that how Enron went bankrupt, though? Through the JEDI contracts, LJM, and the Raptors special-purpose entities that Andy Fastow (the CFO) set up?
 
Isn't that how Enron went bankrupt, though? Through the JEDI contracts, LJM, and the Raptors special-purpose entities that Andy Fastow (the CFO) set up?
Enron went bankrupt, sure, but some key investors made some serious scratch on the way out. This isn't quite the same, though; how it works was explained much better in a Bloomberg or NYTimes article or something like that, which I read back when the sale was first announced but before Nvidia got publicly involved. So I am doing a bad job of regurgitating that story.
 
Enron went bankrupt, sure, but some key investors made some serious scratch on the way out. This isn't quite the same, though; how it works was explained much better in a Bloomberg or NYTimes article or something like that, which I read back when the sale was first announced but before Nvidia got publicly involved. So I am doing a bad job of regurgitating that story.
I don't understand how modern companies like SoftBank can get away with practices that were seemingly once criminal activities?

Lehman Bros.' then-CEO, Dick Fuld, was going to create a bad bank called SpinCo to offload toxic assets and whitewash the balance sheets too... I just don't understand how these practices are legitimate now but were illegal in 2000.


https://www.businessinsider.com/2008/9/lehman-s-latest-plan-to-save-itself-create-a-bad-bank-
 