All the President’s Phones – This Week in Tech 690

IBM buys Red Hat, worst Windows 10 ever, Right to Repair wins, and more.
— What’s in store for Apple’s big event this Tuesday?
— Tim Cook vs the “data industrial complex”
— Amazon’s government controversies
— IBM buys Red Hat for $34 billion – the largest software purchase ever.
— Linus Torvalds is back!
— Trump’s phone may be hacked by Russia and China.
— The midterms may be hacked as well.
— Apple and Amazon might NOT be hacked by China.
— Microsoft still hasn’t released Windows 10 update 1809.
— The Vatican releases its own version of Pokemon Go with saints instead of Pokemon.
— Elon’s big tunnel under LA opens 12/10.
— Pokemon Go will use step info from Apple and Google.
— Right to Repair suit wins.

Host: Leo Laporte
Guests: Ed Bott, Jason Hiner, Rene Ritchie

History of the Microprocessor and the Personal Computer, Part 3

The Datamaster was an all-in-one computer with text-mode CRT display, keyboard, processor, memory, and two 8-inch floppy disk drives all contained in one cabinet. (Photo: Oldcomputers.net)

The Model 5150 wasn’t IBM’s first attempt at building a personal computer; at least four previous projects had been scrapped as the market moved faster than IBM’s corporate decision making. The Intel 8085-equipped System/23 DataMaster business computer also endured a protracted development that began in February 1978. The DataMaster’s entry into the market in July 1981 led to a change in design strategy, and members of its design team were assigned to the new PC project.

IBM’s original plan had been to design the personal computer around Motorola’s 6800 processor at its Austin, Texas research center. IBM marketing had arranged for the PC to be sold through the stores of Sears, Roebuck & Co., and the deal hung in the balance as Motorola’s 6800 and its support chips slipped behind schedule.

A contingency plan named Project Chess was set up to run concurrently with the Austin design, and it seemed to gain traction after Atari approached IBM offering to build a personal computer for it, should IBM want one. Official IBM sanction came when project director William (Bill) Lowe pledged to have the design finalized within a year. To meet this timescale, Lowe would source components from vendors outside IBM.

Project director William Lowe pledged to have the design finalized within a year, sourcing components from vendors outside IBM.

What remained was the choice of processor and operating system for the PC. Lowe and Don Estridge were astute enough to realize that IBM’s senior management would not look kindly upon a PC that posed a performance threat to the company’s lucrative business machines (a System/23 DataMaster terminal with printer listed for around $9,900 at the time).

The original intention seems to have been to use an 8-bit processor, which would have allowed MOS Tech’s 6502, Zilog’s Z80, and Intel’s 8085 to be considered. However, IBM engineers favored a 16-bit design, as did Bill Gates, who lobbied IBM to use a 16-bit processor to fully showcase the operating system he was developing, while the 32-bit architectures from Motorola and National Semiconductor (the 68000 and 16032, respectively) were not due to enter production until after the one-year deadline.

The eventual choice, Intel’s 8088 with its 16-bit internals and 8-bit external data bus, was a compromise: it allayed concerns over compatibility with existing software and expansion options, reduced the bill of materials by using a cheaper processor and support chips that were already available, and retained a significant performance gap between the PC and IBM’s business machines.

IBM’s decision was made easier because the microprocessor landscape had become a war of attrition. MOS Tech, financially decimated by Texas Instruments’ calculator price war, was acquired by Commodore, and its focus shifted from innovation to capitalizing on the success of the 6502. Western Design Center (WDC) would eventually bring 16-bit computing to the 6500 series, but, as with many microprocessor companies, the competition had rendered its parts redundant by the time they reached market.

Zilog’s fortunes also took a downturn, as majority shareholder and later parent company Exxon was happy to see the fledgling company pursue breakneck product diversification. R&D expenditure topped 35% of revenue, and the widening development effort caused Zilog’s own 16-bit Z8000 processor to slip, exposing both the weight of Exxon’s demands and Federico Faggin’s relative managerial inexperience.

Faggin and Ralph Ungermann had started Zilog to build microprocessors, but Exxon had bought Zilog as one cog in a machine, alongside a host of other electronics and software company acquisitions, in a grand design it hoped would rival IBM. That design would turn into a billion-dollar failure.

Zilog’s waning fortunes, even as its Z80 powered a prodigious number of computers, terminals, and industrial machines, also cascaded down to its second-source licensees. AMD’s license for Intel’s 8085 hadn’t translated into an invitation to do likewise with the follow-up 8086 processor. For a viable 16-bit processor, that left Jerry Sanders with the choice of approaching Motorola or Zilog, as National Semiconductor’s offering was shaping up to promise much but deliver little.

Full Story: History of the Microprocessor and the Personal Computer, Part 3 – TechSpot.

IBM reportedly considering sale of chip manufacturing operations

IBM is considering a sale of its chip manufacturing operations, the Wall Street Journal reported last night. The company would not stop designing its own chips, however. Just as AMD outsources manufacturing of the chips it designs, IBM “is looking for a buyer for its manufacturing operations, but plans to retain its chip-design capability,” according to the Journal’s source.

The Financial Times reported that IBM appointed Goldman Sachs to “sound out possible buyers for the business” and that “IBM is not wedded to the idea of selling and could also seek a partner with which to create a joint venture for its semiconductor operations.” Potential buyers mentioned in that report include GlobalFoundries (originally an AMD spinoff) and Taiwan Semiconductor Manufacturing Company (TSMC).

IBM designs the chips for both its POWER servers and its mainframe computers. IBM’s POWER systems dominate the Unix server market, while its mainframes likewise dominate theirs. IBM last month agreed to sell its x86 server business to Lenovo.

Operating chip manufacturing plants is expensive, even for companies as large as IBM. The Journal noted that “[c]hip manufacturing is a very capital-intensive and volatile business” and that selling that portion of its operations could help IBM boost its profitability. “Besides the cost of chip factories and equipment, which frequently run into billions of dollars, IBM and other companies spend heavily on developing new production recipes to keep improving the performance, data-storage capacity, and cost of chips.”

While IBM made the chips for Sony’s PS3 and Microsoft’s Xbox 360, those companies have opted for AMD silicon in their new systems. IBM chips still power the Wii U console.

Financial research firm Sanford C. Bernstein estimated that IBM’s chip manufacturing “generated about $1.75 billion in revenue last year, while losing $130 million in pretax income,” the Journal report said.

An IBM spokesperson told Ars this morning that “IBM does not comment on rumors or speculation.”

via IBM reportedly considering sale of chip manufacturing operations | Ars Technica.

IBM’s x86 exit may shake up market and rivals

IBM’s reported interest in selling parts of its x86 server business to Lenovo may bring major changes to the global market.
IBM is the third-largest seller of x86 servers by factory revenue, with 15.7 percent of the global market in 2012, according to IDC. That represents $5.6 billion for a company that earned $104.5 billion in revenue last year.
IBM’s share of the x86 server segment has declined over the last several years. In 2010, it had 17.4 percent of the market and $5.5 billion in revenue.
By divesting at least part of its x86 server line, IBM gains additional investment dollars that it can spend on its higher-margin efforts, especially analytics and business intelligence, putting more pressure on rivals in those areas.
Lenovo, which is on its way to becoming the world’s top PC vendor, may gain more than an x86 server line. It may also get, as part of any deal, IBM executive talent and capability to reach North American customers served today by Hewlett-Packard and Dell, said Richard Fichera, an analyst at Forrester.

“No Asian company has figured out to date how to sell to North American enterprises,” Fichera said.
But there is no guarantee that Lenovo will be able to keep and expand on IBM’s x86 server market share. It could lose it as well.
“Anytime there is a shift in players, there is always an opportunity to shift market share,” said Jean Bozman, an analyst at IDC.
Analysts don’t believe IBM will divest all of its x86 systems. It is expected to keep its new integrated systems, its PureSystems, which have been engineered for specific tasks, such as business intelligence and data analysis.

The only unexpected part of an IBM divestiture is the timing. The company sold its PC business to Lenovo, and it has also exited the hard disk drive and printer manufacturing businesses.
“IBM has never been shy about divesting businesses,” said Charles King, an analyst at Pund-IT. And as with the PC, printer and disk drives, the low-end x86 server market “is heading further and further into commodity territory.”
Ginni Rometty, IBM’s CEO, appears as interested in jettisoning commodities as her predecessors.
Richard Partridge, a Gartner analyst, said that in IBM’s most recent annual report, Rometty makes clear the firm has no interest in being a commodity seller. “Ours is a different choice: The path of innovation, reinvention and shift to higher value,” she wrote.
For IBM’s x86 customers, Partridge’s advice is to sit tight. It will take months for any divestiture to complete. “Once details become clear, then customers can ask how well the different x86 servers integrate with other IBM server lines,” he said.
via IBM’s x86 exit may shake up market and rivals | PCWorld.

The History of the Modern Graphics Processor, Part 1

The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of 32-bit operating systems and the affordable personal computer.
The graphics industry that existed before then largely consisted of a more prosaic 2D, non-PC architecture, with graphics boards better known by their chips’ alphanumeric naming conventions and their huge price tags. 3D gaming and visualization PC graphics eventually coalesced from sources as diverse as arcade and console gaming, military, robotics and space simulators, and medical imaging.
The early days of 3D consumer graphics were a Wild West of competing ideas, from how to implement the hardware to the use of different rendering techniques and their application and data interfaces, as well as persistent naming hyperbole. The early graphics systems featured a fixed function pipeline (FFP) and an architecture following a very rigid processing path, utilizing almost as many graphics APIs as there were 3D chip makers.
While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks (this is the first installment in a series of four articles) we’ll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry’s consolidation at the turn of the century, and today’s modern GPGPU.
1976 – 1995: The Early Days of 3D Consumer Graphics
The first true 3D graphics started with early display controllers, known as video shifters and video address generators. They acted as a pass-through between the main processor and the display. The incoming data stream was converted into serial bitmapped video output such as luminance, color, and vertical and horizontal composite sync, which kept each line of pixels in the display output and synchronized each successive line along with the blanking interval (the time between ending one scan line and starting the next).
A flurry of designs arrived in the latter half of the 1970s, laying the foundation for 3D graphics as we know them.

Atari 2600 released in September 1977
RCA’s “Pixie” video chip (CDP1861) of 1976, for instance, was capable of outputting an NTSC-compatible video signal at 64×128 resolution, or 64×32 for the ill-fated RCA Studio II console.
The video chip was followed a year later by the Television Interface Adapter (TIA) 1A, which was integrated into the Atari 2600 to generate the screen display and sound effects and to read the input controllers. Development of the TIA was led by Jay Miner, who later also led the design of the custom chips for the Commodore Amiga computer.
In 1978, Motorola unveiled the MC6845 video address generator. This became the basis for the IBM PC’s Monochrome Display Adapter and Color Graphics Adapter (MDA/CGA) cards of 1981, and provided the same functionality for the Apple II. Motorola added the MC6847 video display generator later the same year, which made its way into a number of first-generation personal computers, including the Tandy TRS-80.

IBM PC’s Monochrome Display Adapter
A similar solution from Commodore’s MOS Tech subsidiary, the VIC, provided graphics output for 1980-83 vintage Commodore home computers.
In November the following year, LSI’s ANTIC (Alphanumeric Television Interface Controller) and CTIA/GTIA co-processor (Color or Graphics Television Interface Adaptor) debuted in the Atari 400. ANTIC processed 2D display instructions using direct memory access (DMA). Like most video co-processors, it could generate playfield graphics (background, title screens, scoring display), while the CTIA generated colors and moveable objects. Yamaha and Texas Instruments supplied similar ICs to a variety of early home computer vendors.
The next steps in the graphics evolution were primarily in the professional fields.
Intel used its 82720 graphics chip as the basis for the $1,000 iSBX 275 Video Graphics Controller Multimode Board. It was capable of displaying eight-color data at a resolution of 256×256 (or monochrome at 512×512). Its 32KB of display memory was sufficient to draw lines, arcs, circles, rectangles and character bitmaps. The chip also had provision for zooming, screen partitioning and scrolling.
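As a quick sanity check on those display-memory figures, here is a minimal back-of-the-envelope sketch, assuming a simple packed framebuffer layout (our assumption for illustration, not something stated in the article):

```python
# Back-of-the-envelope framebuffer sizes for the two iSBX 275 modes mentioned
# above, assuming a simple packed bitmap layout (an illustrative assumption).
def framebuffer_kb(width: int, height: int, bits_per_pixel: int) -> float:
    """Return the framebuffer size in kilobytes."""
    return width * height * bits_per_pixel / 8 / 1024

print(framebuffer_kb(512, 512, 1))  # monochrome 512x512 -> 32.0 KB
print(framebuffer_kb(256, 256, 3))  # eight colors (3 bits/pixel) at 256x256 -> 24.0 KB
```

Either mode fits within the board’s 32KB of display memory, which is consistent with the capabilities listed above.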
SGI quickly followed up with its IRIS Graphics for workstations: a GR1.x graphics board with provision for separate add-in (daughter) boards for color options, geometry, Z-buffer and Overlay/Underlay.
Full Story: The History of the Modern Graphics Processor – TechSpot.

World’s top supercomputer from ‘09 is now obsolete, will be dismantled

Five years ago, an IBM-built supercomputer designed to model the decay of the US nuclear weapons arsenal was clocked at speeds no computer in the history of Earth had ever reached. At more than one quadrillion floating point operations per second (that’s a million billion, or a “petaflop”), the aptly-named Roadrunner was so far ahead of the competition that it earned the #1 slot on the Top 500 supercomputer list in June 2008, November 2008, and one last time in June 2009.
Today, that computer has been declared obsolete and it’s being taken offline. Based at the US Department of Energy’s Los Alamos National Laboratory in New Mexico, Roadrunner will be studied for a while and then ultimately dismantled. While the computer is still one of the 22 fastest in the world, it isn’t energy-efficient enough to make the power bill worth it.
“During its five operational years, Roadrunner, part of the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program to provide key computer simulations for the Stockpile Stewardship Program, was a workhorse system providing computing power for stewardship of the US nuclear deterrent, and in its early shakedown phase, a wide variety of unclassified science,” Los Alamos lab said in an announcement Friday.
Roadrunner cost more than $120 million; its 296 server racks, covering 6,000 square feet, were connected with InfiniBand and contained 122,400 processor cores. The hybrid architecture used IBM PowerXCell 8i CPUs (an enhanced version of the Sony PlayStation 3 processor) and AMD Opteron dual-core processors. The AMD processors handled basic tasks, with the Cell CPUs “taking on the most computationally intense parts of a calculation—thus acting as a computational accelerator,” Los Alamos wrote.
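To make that division of labor concrete, here is a purely conceptual sketch in plain Python (not Cell SDK code; all names are hypothetical) of the host/accelerator pattern the lab describes, where the host partitions and dispatches work and the accelerator runs the numerically intense kernel:

```python
# Conceptual host/accelerator offload pattern, loosely modeled on the Opteron
# (host) plus PowerXCell 8i (accelerator) split described above. Hypothetical
# names; this illustrates the pattern, not Roadrunner's actual code.
def accelerator_kernel(chunk):
    # Stand-in for the "computationally intense part of a calculation"
    # that was offloaded to the Cell processors.
    return sum(x * x for x in chunk)

def host_program(data, chunk_size=4):
    # Host side: partition the problem, dispatch chunks, gather the results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return sum(accelerator_kernel(c) for c in chunks)

print(host_program(list(range(16))))  # sum of squares 0..15 -> 1240
```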

Los Alamos National Laboratory
“Although other hybrid computers existed, none were at the supercomputing scale,” Los Alamos said. “Many doubted that a hybrid supercomputer could work, so for Los Alamos and IBM, Roadrunner was a leap of faith… As part of its Stockpile Stewardship work, Roadrunner took on a difficult, long-standing gap in understanding of energy flow in a weapon and its relation to weapon yield.”
Roadrunner lost its world’s-fastest title in November 2009 to Jaguar, another Department of Energy supercomputer: a Cray system built around AMD Opteron processors. Jaguar hit 1.76 petaflops to take the title, and it still exists as part of an even newer cluster called Titan. Titan took the top spot on the November 2012 supercomputer list with a speed of 17.6 petaflops.
Supercomputing researchers are now looking toward exascale speeds—1,000 times faster than a petaflop—but major advances in energy efficiency and price-performance are necessary.
Petaflop machines aren’t automatically obsolete—a petaflop is still speedy enough to crack the top 25 fastest supercomputers. Roadrunner is thus still capable of performing scientific work at mind-boggling speeds, but has been surpassed by competitors in terms of energy efficiency. For example, in the November 2012 ratings Roadrunner required 2,345 kilowatts to hit 1.042 petaflops and a world ranking of #22. The supercomputer at #21 required only 1,177 kilowatts, and #23 (clocked at 1.035 petaflops) required just 493 kilowatts.
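Those power figures are easier to compare as performance per watt. Here is a minimal sketch using only the November 2012 numbers quoted above (the helper function is ours, for illustration):

```python
# Flops-per-watt comparison from the November 2012 figures quoted above:
# sustained performance in petaflops and power draw in kilowatts.
systems = {
    "Roadrunner (#22)": (1.042, 2345),
    "System at #23": (1.035, 493),
}

def gflops_per_watt(petaflops: float, kilowatts: float) -> float:
    """Convert petaflops and kilowatts into gigaflops per watt."""
    return (petaflops * 1e6) / (kilowatts * 1e3)  # 1 PF = 1e6 GF, 1 kW = 1e3 W

for name, (pf, kw) in systems.items():
    print(f"{name}: {gflops_per_watt(pf, kw):.2f} GF/W")
# Roadrunner: ~0.44 GF/W; the #23 system: ~2.10 GF/W, roughly five times the
# energy efficiency at essentially the same sustained speed.
```

That efficiency gap, not raw speed, is what made the power bill the deciding factor.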
“Future supercomputers will need to improve on Roadrunner’s energy efficiency to make the power bill affordable,” Los Alamos wrote. “Future supercomputers will also need new solutions for handling and storing the vast amounts of data involved in such massive calculations.”
After Roadrunner is shut off today, researchers will spend a month doing experiments on “operating system memory compression techniques for an ASC relevant application, and optimized data routing to help guide the design of future capacity cluster computers.” After that, the cluster will finally be dismantled, marking the end of the world’s first petaflop supercomputer.
via World’s top supercomputer from ‘09 is now obsolete, will be dismantled | Ars Technica.