Episodes
-
Nelumbo nucifera, or the sacred lotus, is a plant that grows in flood plains, rivers, and deltas. Its seeds can remain dormant for years and, when the floods come along, sprout into a colony of plants and flowers. Some of the oldest seeds have been found in China, where they're known to represent longevity. No surprise, given their nutritional value and their connection to the waters that irrigated the crops of the day. The lotus also grows in faraway lands, from India all the way out to Australia. The flower is sacred in Hinduism and Buddhism, and further back in ancient Egypt.
Padmasana is a Sanskrit term that combines Padma, or lotus, with Asana, or posture. The Pashupati seal from the Indus Valley civilization shows a deity in what's widely considered the first documented yoga pose, from around 2,500 BCE. Some 2,700 years later (give or take a century), the Hindu author and mystic Patanjali wrote a work referred to as the Yoga Sutras. Here he outlined the original asanas, or sitting yoga poses. The Rig Veda, from around 1,500 BCE, is the oldest currently known Vedic text. It is also the first to use the word "yoga". It describes songs, rituals, and mantras the Brahmans of the day used - as well as the Padma. Further Vedic texts explore how the lotus grew out of Lord Vishnu with Brahma in the center, and how Brahma created the Universe out of lotus petals. Lakshmi went on to grow out of a lotus from Vishnu as well.
It was only natural that humans would attempt to align their own meditation practices with the beauty of the lotus. By the 300s, art and coins showed people in the lotus position. It was described in texts that survive from the 8th century. Over the centuries, contradictions in texts were clarified in a period known as Classical Yoga; then Tantra and Hatha Yoga were developed and codified in the Post-Classical Yoga age; and as empires grew and India became a part of the British Empire, yoga began to travel to the West in the late 1800s. By 1893, Swami Vivekananda was giving lectures at the Parliament of Religions in Chicago.
More practitioners meant more systems of yoga. Yogendra brought asanas to the United States in 1919, as more Indians migrated there. Babaji's kriya yoga arrived in Boston in 1920. Then, as we've discussed in previous episodes, the United States tightened immigration in the 1920s and people had to go to India to get more training. Theos Bernard's Hatha Yoga: The Report of a Personal Experience brought some of that knowledge home when he came back in 1947. Indra Devi opened a yoga studio in Hollywood and wrote books for housewives, bringing home a whole system, or branch, of yoga. Walt and Magana Baptiste opened a studio in San Francisco. Swamis began to come to the US and more schools were opened. Richard Hittleman began to teach yoga in New York and took his teaching to television in 1961. He was one of the first to separate the religious aspects from the health benefits. By 1965, the immigration quotas were removed and a wave of teachers came to the US to teach yoga.
The Beatles went to India in 1966 and 1968, and for many Transcendental Meditation took root; it has now grown to over a thousand training centers and over 40,000 teachers. Swamis opened meditation centers and institutes, and started magazines. Yoga became so big that Rupert Holmes even poked fun at it in his song "Escape (The Piña Colada Song)" in 1979. Yoga had become part of the counterculture, and the generation that followed represented a backlash of sorts.
A common theme in the rise of personal computers is that the early pioneers were part of that counterculture. Mitch Kapor graduated high school in 1967, just in time to be one of the best examples. Kapor built his own calculator as a kid before going to camp to get his first exposure to programming, on a Bendix. His high school got an IBM 1620 minicomputer and he got the bug. He went off to Yale at 16, learned to program in APL, then found Computer Lib by Ted Nelson and learned BASIC. Then he discovered the Apple II.
Kapor did some programming for $5 per hour as a consultant, started the first east coast Apple User Group, and did some work around town. There are generations of people who did and do this kind of consulting, although now the rates are far higher. Through the user group he met a grad student named Eric Rosenfeld, who was working on his dissertation and needed some help programming, so Kapor wrote a little tool that took the idea of statistical analysis from the Time Shared Reactive Online Library, or TROLL, and ported it to the microcomputer. He called it Tiny Troll.
Then he enrolled in the MBA program at MIT. He got a chance to see VisiCalc and meet Bob Frankston and Dan Bricklin, who introduced him to the team at Personal Software. Personal Software had been founded by Dan Fylstra and Peter Jennings when they published MicroChess for the KIM-1 computer. That led to ports for the 1977 Trinity of the Commodore PET, Apple II, and TRS-80, and by then they had taken Bricklin and Frankston's VisiCalc to market. VisiCalc was the killer app for those early PCs and helped make the Apple II successful.
Personal Software brought Kapor on, as well as Bill Coleman, later of BEA Systems, and Electronic Arts cofounder Rich Melmon. Today, software developers get around 70 percent in royalties when they publish software on app stores, but at the time fees were closer to 8 percent, a model pulled from book royalties. Much of the rest went to production of the box and disks, sales and marketing, and support. Kapor was to write a product that could work with VisiCalc. By then Rosenfeld was off to the world of corporate finance, so Kapor moved to Silicon Valley, learned how to run a startup, moved back east in 1979, and released VisiPlot and VisiTrend in 1981. He made over half a million dollars in royalties in the first six months.
By then, he had bought out Rosenfeld's share in what he was doing and hired Jonathan Sachs, who had been at MIT earlier, where he wrote the STOIC programming language, before going to work at Data General. Sachs had worked on spreadsheet ideas at Data General with a manager there, John Henderson, but after they left Data General and that partnership fell apart, he worked with Kapor instead. They knew that for software to be fast it needed to be written in a lower-level language, so they picked Intel 8088 assembly language, given that compiled C wasn't fast enough yet. The IBM PC came in 1981 and everything changed. Mitch Kapor and Jonathan Sachs started Lotus in 1982.
Sachs got to work on what would become Lotus 1-2-3. Kapor turned out to be a great marketer and product manager. He listened to what customers said in focus groups. He pushed to make things simpler and use less jargon. They released their new spreadsheet tool in 1983 and it worked flawlessly on the IBM PC, and while Microsoft had Multiplan and VisiCalc was the incumbent spreadsheet program, Lotus quickly took market share from them and from SuperCalc.
Conceptually it looked similar to VisiCalc. They used the letter A for the first column, B for the second, and so on. That has since become a standard in spreadsheets. They used the number 1 for the first row, the number 2 for the second. That too is now a standard. They added a split screen, also now a standard. They added macros, with branching if-then logic. They added different video modes, which could give color and bitmapping. They added an underlined letter so users could pull up a menu and quickly select the item they wanted once they had those shortcuts memorized, now a standard in most menuing systems. They added the ability to create bar charts, pie charts, and line charts. One could even spread a sheet across multiple monitors, like in a magazine. They refined how fields were calculated and took advantage of larger amounts of memory to make Lotus far faster than anything else on the market.
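As a quick aside, here's a tiny Python sketch (purely illustrative, nothing Lotus or VisiCalc shipped) of how that column-letter, row-number convention extends past column Z, the same way modern spreadsheets still do it:

```python
# Illustrative only: convert zero-based (row, col) indices to the A1-style
# cell references that VisiCalc and Lotus 1-2-3 made standard.
def cell_name(row, col):
    letters = ""
    col += 1  # work in 1-based column numbers
    while col > 0:
        col, remainder = divmod(col - 1, 26)
        letters = chr(ord("A") + remainder) + letters
    return f"{letters}{row + 1}"

print(cell_name(0, 0))   # A1  - first column, first row
print(cell_name(1, 27))  # AB2 - columns wrap from Z to AA, AB, ...
```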
They went to Comdex towards the end of the year and introduced Lotus 1-2-3 to the world. The software could be used as a spreadsheet, but the 2 and 3 referred to graphics and database management. They did $900,000 in orders there before they went home, and couldn't even keep up with duplicating the disks. Comdex was still invitation only. The product became so popular that it was used by clone makers to test for IBM compatibility, and where VisiCalc became the app that helped propel the Apple II to success, Lotus 1-2-3 became the app that helped propel the IBM PC to success.
Lotus was rewarded with $53 million in sales for 1983 and $156 million in 1984. Mitch Kapor found himself. They quickly scaled from fewer than 20 to 750 employees. They brought in Freada Klein, who had her PhD, to be Head of Employee Relations, and charged her with making Lotus the most progressive employer around. After her success at Lotus, she left to start her own company and later married Kapor. Sachs left the company in 1985 and moved on to focus solely on graphics software. He still responds to requests on the phpBB forum at dl-c.com.
They ran TV commercials. They released a suite of Mac apps they called Lotus Jazz. More television commercials. Jazz didn't go anywhere and only sold 20,000 copies. Meanwhile, Microsoft released Excel for the Mac, which sold ten times as many. Some blamed the lack of sales on the stringent copy protection. Others blamed the lack of memory to do cool stuff. Others blamed the high price. It was the first major setback for the young company.
After that meteoric rise, Kapor left the company in 1986, at about the height of its success. He replaced himself with Jim Manzi. Manzi pushed the company into network applications. These would become the center of the market, but they were just catching on and didn't prove to be a profitable venture just yet. A defensive posture, rather than expanding into an adjacent market, might have made more sense - at least it would have if anyone had known how aggressive Microsoft was about to get.
Manzi was far more concerned with the millions of illegal copies of the software in the market than with innovation, though. As we turned the page to the 1990s, Lotus had moved to a product built in C and introduced the ability to use graphical components in the software, but it wouldn't be ported to the new Windows operating system until 1991, for Windows 3. By then there were plenty of competitors, including Quattro Pro, and while Microsoft Excel began on the Mac, since its release for Windows in 1987 it had been a showcase of the cool new features a windowing operating system could provide an application - especially what they called 3D charts and tabbed spreadsheets.
There was no catching up to Microsoft by then and sales steadily declined. Along the way, Lotus released Lotus Agenda, an information manager that could be used for time management, project management, and as a database. Kapor was a great product manager, so it stands to reason he would build a great product to manage products. Agenda never found commercial success, though, and was later open sourced under the GPL.
Bill Gross wrote Magellan there before he left to found GoTo.com, which was renamed Overture, pioneered the idea of paid search advertising, and was acquired by Yahoo!. Magellan cataloged the contents of the internal drive and so became a search engine for it. It sold half a million copies and should have been profitable, but was cancelled in 1990. They also released a word processor called Manuscript in 1986, which never gained traction and was cancelled in 1989 - just when a suite of office automation apps needed to become more cohesive.
Ray Ozzie had been hired at Software Arts to work on VisiCalc and then helped Lotus get Symphony out the door. Symphony shipped in 1984 and expanded the spreadsheet by adding text with the DOC word processor, charts with the GRAPH graphics program, FORM for table management, and COM for communications. Ozzie dutifully shipped what he was hired to work on, but he had a deal that, when they were done, he could build a company that would design software Lotus would then sell. It was a match made in heaven: Ozzie had worked on PLATO and borrowed the ideas of PLATO Notes, a collaboration tool developed at the University of Illinois Urbana-Champaign, to build what he called Lotus Notes.
PLATO was more than productivity. It was a community that spanned decades, and Control Data Corporation had failed to take it to the mass corporate market. Ozzie took the best parts of it, founded a company called Iris Associates, and built in isolation from the rest of Lotus. They finally released the result as Lotus Notes in 1989. It was a huge success and Lotus bought Iris in 1994. Yet they never found commercial success with other socket-based client-server programs, and IBM acquired Lotus in 1995. That product is now known as Domino, the name of the Notes 4 server released in 1996. Ozzie went on to build a company called Groove Networks, which was acquired by Microsoft, who appointed him one of their Chief Technology Officers. When Bill Gates left Microsoft, Ozzie took the position of Chief Software Architect that he vacated. He and Dave Cutler went on to work on a project called Red Dog, which evolved into what we now know as Microsoft Azure.
Few would have guessed that Ozzie and Kapor's handshake agreement on Notes could have become a real product. Not only could people not yet grasp the concept of collaboration and productivity on a network in the late 1980s, but that type of deal hadn't been done before. Kapor, though, had realized by then that larger companies have a hard time shipping net-new software properly. Sometimes those projects are best done in isolation. And all the better if the parties involved are financially motivated with shares, like Kapor had wanted at Personal Software in the 1970s before he wrote Lotus 1-2-3.
VisiCalc had sold about a million copies, but it would cease production the same year Excel was released. Lotus hung on longer than most who competed with Microsoft on any beachhead they blitzkrieged. Microsoft released Exchange Server in 1996, and Notes had a few good years before Exchange moved in to become the standard in that market. Excel began on the Mac but eventually took the market from Lotus, after Charles Simonyi stepped in to help make the product great.
Along the way, the Lotus ecosystem created other companies, just as Lotus itself had been born in the Visi ecosystem. Symantec became what we now call a "portfolio" company in 1985 when they introduced NoteIt, a natural language processing tool used to annotate docs in Lotus 1-2-3. But Bill Gates mentioned Lotus by name multiple times as a competitor in his Internet Tidal Wave memo in 1995. He mentioned specific features, like how they could do secure internet browsing and that they had a web publisher tool - Microsoft's own FrontPage was released in 1995 as well. He mentioned an internet directory project with Novell and AT&T. Active Directory was released a few years later, in 1999, after Jim Allchin had come in to help shepherd LAN Manager. Notes itself survived into the modern era, but by 2004 Blackberry released their Exchange connector before they released the Lotus Domino connector. That's never a good sign.
Some of the history of Lotus is covered in Scott Rosenberg's 2008 book, Dreaming in Code. Other parts are documented here and there in other places. Still others are lost to time.
Kapor went on to invest in UUNET, which became a huge early internet service provider. He invested in RealNetworks, which launched the first streaming media service on the Internet. He invested in the creators of Second Life. He never seemed vindictive toward Microsoft, but after AOL acquired Netscape and Microsoft won the first browser war, he became the founding chair of the Mozilla Foundation and so helped bring Firefox to market. By 2006, Firefox had taken 10 percent of the market and went on to be a dominant force in browsers. Kapor has also sat on boards and acted as an angel investor for startups ever since leaving the company he founded.
He also flew to Wyoming in 1990 after he read a post on The WELL from John Perry Barlow. Barlow was one of the great thinkers of the early Internet. They worked with John Gilmore (a Sun Microsystems alum, GNU contributor, and cypherpunk) to found the Electronic Frontier Foundation, or EFF. The EFF has since been the nonprofit that leads the fight for "digital privacy, free speech, and innovation." So not everything is about business.
-
We covered computer and internet copyright law in a previous episode. That type of law began with interpretations that tried to take the technology out of cases so they could be decided as though what was being protected was a printed work - or at least it did for a time. But when it came to the internet, the laws, the case law, and their knock-on effects began to diverge into their own body of jurisprudence.
Safe Harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, a law passed in 1998 as part of the Digital Millennium Copyright Act that shields online portals and internet service providers from copyright infringement claims. Immunity from copyright infringement is one form of protection, but more was needed. Section 230 is another law, one that protects those same organizations from being sued for third-party content uploaded to their sites. That's the law Trump wanted overturned during his final year in office, but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has its implementation of the Electronic Commerce Directive of 2000, and courts in plenty of other countries like England and Germany have found similarly, it is now part of being an Internet company. Although the future of "big tech" cases (and the damage many claim is being done to democracy) may find it refined or limited.
That's because the concept of Internet Exceptionalism itself is being reconsidered now that the internet is here to stay. Internet Exceptionalism is the idea that laws for the internet diverge from the precedents set for other forms of media distribution. For example, a newspaper can be sued for libel or defamation, but a website is mostly shielded from such suits, because the internet is different. Pages are available instantly, changes can be made instantly, and the reach is far greater than ever before. The internet has arguably become the greatest tool to spread democracy and yet potentially one of its biggest threats. Which some might have argued about newspapers, magazines, and other forms of print media in centuries past.
The very idea of Internet Exceptionalism has eclipsed the original intent. Chris Cox and Ron Wyden initially intended to help fledgling Internet Service Providers (ISPs) jumpstart content on the internet. The internet had been privatized in 1995 and companies like CompuServe, AOL, and Prodigy were already under fire for the content on their closed networks. Cubby v. CompuServe in 1991 had found that online providers weren't considered publishers of content and couldn't be held liable for speech on their platforms, in part because they did not exercise editorial control over that content. Stratton Oakmont v. Prodigy found that Prodigy did have editorial control (and in fact advertised themselves as having a better service because of it) and so could be found liable like a newspaper would. Cox and Wyden were one of the few conservative and liberal pairs of lawmakers who could get along in the divisive era when Newt Gingrich came to power and tried to block everything Bill Clinton tried to do.
Yet there were aspects of the United States that were changing outside of politics. Congress spent years negotiating a telecommunications overhaul bill that came to be known as the Telecommunications Act of 1996. New technology led to new options. Some saw content they found to be indecent, and so the Communications Decency Act (Title V of the Telecommunications Act) was passed in 1996, only to be found a violation of the First Amendment in Reno v. ACLU and struck down by the Supreme Court in 1997. Section 230 of that act was specifically about the preservation of free speech, and so it was severed from the act and stood alone. It would be adjudicated time and time again and eventually became an impenetrable shield that protects online providers from the need to scan every message posted to a service to see if it would get them sued. Keep in mind that society itself was changing quickly in the early 1990s. Tipper Gore wanted to slap a label on music to warn parents that it had explicit lyrics. The "Satanic Panic," as it's called by history, reused tropes such as cannibalism and child murder to give the moral majority an excuse to try to restrict that which they did not understand. Conservative and progressive politics have always been a two-steps-forward, one-step-back truce. Heavy metal would seem like nothin' once parents heard the lyrics of gangster rap.
But Section 230 continued on. It stated that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." It only took 26 words to change the world. They said that the people who host the content can't be sued for the content because, as courts interpreted it, it's free speech. Think of a public forum like a hall on a college campus that might restrict one group from speaking and so suppress speech or censor a group. Now, Section 230 didn't say providers weren't allowed to screen material; instead it shielded them from being held liable for that material. The authors of the bill felt that if providers could be held liable for any editing, they wouldn't do any. Now providers could edit some without reviewing every post. And keep in mind that the volume of posts on message boards and of new websites had already become too much by the late 1990s to be manually monitored. Further, as those companies became bigger businesses, they became more attractive targets for lawsuits.
Section 230 had some specific exclusions. Criminal laws could still be applied, as could state, sex trafficking, and privacy laws. Intellectual property laws also remained untouched, thus OCILLA. To be clear, reading the law, the authors sought to promote the growth of the internet - and it worked. Yelp gets sued over reviews, but the cases are dismissed. Twitter can get sued over a Tweet when someone doesn't like what is said, but it's the poster and not Twitter who is liable. Parody sites, whistleblower sites, watchdog sites, review sites, blogs - an entire industry was born, in which each player of what would later be known as the Web 2.0 market could self-regulate.
Those businesses grew far beyond the message boards of the 1990s. This was also a time when machine learning became more useful. A site like Facebook could show a feed of posts not in reverse chronological order but by "relevance." Google could sell ads and show them based on the relevance of a search term. Google could buy YouTube and run ads on videos. Case after case poked at the edges of what could be used to hold a site liable. The fact that the courts saw a post on Reddit as free speech, no matter how deplorable the comments, provided a broad immunity to liability that was, well, exceptional in a way.
Some countries could fine or imprison people if they posted something negative about the royal family or the party in charge. Some of those countries saw that American-style freedom of speech could be used as a weapon against the US. The US became something of a safe haven for free speech, and many parts of the internet were anonymous. In this way (as was previously done with films and other sources of entertainment and news) the US began to export the culture of free speech. But every country also takes imports. Some of those were real, true ideas, homegrown or brought in from abroad. Others were not: early posters on message boards might claim the Armenian Genocide was a hoax - or the Holocaust. A single post could ruin a career. Craigslist allowed for sex trafficking, and while they eventually removed that section, sites like Backpage have received immunity. So even some of the exceptions are, um, not. Further, extremist groups use pages to spread propaganda and even recruit soldiers to spread terror.
The courts found that sites were immune to suits over fake profiles on dating sites - even if the profile was of a famous person and that person was getting threatening calls. The courts initially found sites needed to take down content if they were informed it was libelous, but sites have since received broad immunity even when they don't, due to the sheer amount of content. Batzel v. Smith saw a lawyer's firm ruined over false reports that she was the granddaughter of the Nazi Heinrich Himmler and the beneficiary of Nazi art theft, even though she wasn't - and she too lost her case. Sites that provide neutral tools are shielded from defamation claims, and even when they're only neutral-ish you rarely see them held to account. In Goddard v. Google, the Google Keyword Tool recommended that advertisers include the word "free" in mobile content, which Goddard claimed led to fraudulent subscription service recruitment. These were machine learning-based recommendations. The court again found that, provided the Keyword Tool was neutral, advertisers could adopt or reject the recommendation, so Google was shielded.
Still, time and time again the idea of safe harbor for internet companies, and whether internet exceptionalism should continue, comes up. The internet gave a voice to the oppressed, but also to the oppressors. That's neutrality in a way, except that the oppressors (especially when state-sponsored actors are involved) often have more resources to drown out other voices, just like in real life. Some have argued a platform like Facebook should be held accountable for its part in the Capitol riots - which is to say, as a place where people practiced free speech. Others look to Backpage as facilitating the exploitation of children or as a means of oppression. Others still see terrorist networks as existing and growing because of the ability to recruit online.
The Supreme Court is set to hear docket number 21-1333 in 2022. Gonzalez v. Google was brought by Reynaldo Gonzalez and looks at whether Section 230 can immunize Google even though they have made targeted recommendations - in this case when ISIS used YouTube videos to recruit new members - through the recommendation algorithm, an algorithm that is arguably neutral. But does a platform that powerful have a duty to do more, especially when there's a chance that Section 230 bumps up against anti-terrorism legislation? Again and again the district courts in the United States have found that Section 230 provides broad immunization to online content providers. Now, the Supreme Court will weigh in. After that, billions of dollars may have to be pumped into better content filtration, or the courts may continue to apply broad First Amendment guidance.
The Supreme Court is packed with "originalists." They still have phones, which the framers did not. The duty that common law places on those who disseminate negligent or reckless content has lost its requirement for reasonable care, due to the liability protections Section 230 affords purveyors of content. This has given rise to hate speech and misinformation. John Perry Barlow's infamous A Declaration of the Independence of Cyberspace, written in protest of the CDA, was in effect supported by Section 230 of that same law. But the removal of the idea and duty of reasonable care, along with the exemptions, has now removed accountability from what seems like any speech. Out of the ashes of accountability, the very concept of free speech and where the duty of reasonable care lies may be reborn. We now have the ability to monitor via machine learning, we've redefined what it means to moderate, and there's now robust competition for eyeballs on the internet. We've also seen how a lack of reasonable standards can lead to real-life consequences, and that an independent cyberspace can bleed through into the real world.
If the Supreme Court simply upholds the findings of the past, then the movement towards internet sovereignty may accelerate, or things may stay the same. Look to where venture capital flows for clues as to how the First Amendment will crash into the free market, and see if its salty waters leave data and content aggregators with valuations far lower than where they once were. The asset of content may some day become a liability, with injuries that could pose an existential threat to the owner. The characters may walk the astral plane, but eventually they must return to the prime material plane along their tether to take a long rest or face dire consequences. The world simply can't continue to become more and more toxic - and yet there's a reason the First Amendment is, well, first.
Check out Twenty-Six Words Created the Internet. What Will It Take to Save It?
-
Bluetooth The King
Ragnar Lodbrok was a legendary Norse king who conquered parts of Denmark and Sweden. And if we're to believe the songs, he led some of the best raids against the Franks and the loose patchwork of nations Charlemagne put together called the Holy Roman Empire. We use the term legendary because the stories of Ragnar were passed down orally and don't necessarily reconcile with other written accounts. In other words, the man in the songs sung by the bards of old is likely in fact a composite of deeds from many a different hero of the Norse.
Ragnar supposedly died in a pit of snakes at the hands of the Northumbrian king, and his six sons formed a Great Heathen Army to avenge their father. His sons ravaged modern England in the wake of their father's death before becoming leaders of various lands they either inherited or conquered. One of those sons, Sigurd Snake-in-the-Eye, returned home to rule his lands and had children, including Harthacnut. He in turn had a son named Gorm. Gorm the Old was a Danish king who lived to be nearly 60 in a time when life expectancy for most was about half that. Gorm raised a Jelling stone in honor of his wife Thyra. As did his son, in honor of his parents. That stone is carved with runes that say:
"King Haraldr ordered this monument made in memory of Gormr, his father, and in memory of Thyrvé, his mother; that Haraldr who won for himself all of Denmark and Norway and made the Danes Christian."
That stone was erected by a Danish king named Harald Gormsson. He converted to Christianity as part of a treaty with the Holy Roman Emperor of the day. He united the tribes of Denmark into a kingdom, one that would go on to expand the reach and reign of the line. Just as Bluetooth would unite devices. Even the logo is a combination of the runes that make up his initials, HB. Once united, his descendants would go on to rule Denmark, Norway, and England. For a time. Just as Bluetooth would go on to be an important wireless protocol. For a time.
Personal Area Networks
Many early devices shipped with infrared so people could use a wireless mouse or keyboard. But those never seemed to work so well. And computers with a mouse and keyboard and drawing pad and camera and Zip drive and everything else meant that not only did devices have to be connected to sync, they also had to pull a lot of power and create an even bigger mess on our desks. What the world needed instead was an inexpensive chip that could communicate wirelessly and not pull a massive amount of power, since some devices would be in constant communication. And if we needed a power cord then we might as well just use USB, or those RS-232 interfaces (serial ports) that were initially developed in 1960 - and that were slow and cumbersome. We could call this new network a Personal Area Network, or PAN.
The Palm Pilot was popular, but docking and plugging in that serial port was not exactly optimal, even though every ATX motherboard had a port or two. So a Bluetooth Special Interest Group was formed to conceive and manage the standard in 1998; while it initially had half a dozen member companies, it now has over 30,000. The initial development started in the late 1990s at Ericsson. Bluetooth would use short-range UHF radio waves in the band between 2.402 and 2.480 GHz to exchange data with computers and cell phones, which were evolving into mobile devices at the time.
The technology was initially showcased at COMDEX in 1999. Within a couple of years there were phones that could sync, kits for cars, headsets, and chips that could be put into devices - or cards or USB adapters - to get a device to sync at 721 Kbps. We could pair up to 7 Bluetooth secondary devices to a primary. They then frequency hopped using the Bluetooth device address and clock provided by the primary, which sends a radio signal to secondaries with the range of frequencies to use; the secondaries respond and stay in step with that frequency and clock state. And unlike a lot of other wireless technologies, it just kinda worked.
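To make the hopping idea concrete, here's a toy Python sketch. It is not the real Bluetooth hop-selection kernel (the seed and selection function here are invented for illustration); it just shows how sharing the primary's address and clock lets every device in a piconet land on the same channel at the same moment:

```python
# Toy illustration of frequency hopping, NOT the actual Bluetooth algorithm.
# Classic Bluetooth uses 79 one-MHz channels from 2402 to 2480 MHz.
import random

CHANNELS = [2402 + n for n in range(79)]  # MHz

def hop_sequence(primary_address, clock, hops=5):
    # Both primary and secondaries derive the same pseudo-random walk
    # from the same shared inputs (illustrative seed, not the spec's).
    rng = random.Random((primary_address << 28) ^ clock)
    return [rng.choice(CHANNELS) for _ in range(hops)]

# Every paired device computes an identical sequence from identical inputs.
print(hop_sequence(primary_address=0x1A2B3C4D5E6F, clock=42))
```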
And life seemed good. Bluetooth went to the IEEE, which had assigned networking the 802 family of standards, with Ethernet being 802.3 and Wi-Fi being 802.11. So Personal Area Networks became 802.15, with Bluetooth 1.1 becoming 802.15.1. And the first Bluetooth phone, the Sony Ericsson T39, shipped in 2001.
Bluetooth 2 came in 2005 and gave us 2.1 Mbps speeds and increased the range from 10 to 30 meters. By then, over 5 million devices were shipping every week. More devices meant a larger attack surface, and security researchers were certainly knocking at the door. Bluetooth 2.1 added secure simple pairing. Then Bluetooth 3 came in 2009, bringing speeds up to 24 Mbps by letting an established connection hand data off to Wi-Fi. But that traded energy for speed, and it wasn't really the direction Bluetooth needed to go - even if a billion devices had shipped by the end of 2006.
Bluetooth 4
The mobility era was upon us and it was increasingly important, not just for ARM chips but for the rest of the growing number of devices, to use less power. Bluetooth 4 came along in 2010 and was slower at 1 Mbps, but used far less energy. This is when the iPhone 4S line fully embraced the technology, helping make it a standard. While not directly responsible for the fitness tracker craze, it certainly paved the way for a small coin cell battery to run these types of devices for long periods of time. And it allowed for connecting devices 100 meters, or well over 300 feet, away. So leave the laptop in one room and those headphones should be fine in the next.
And while we're at it, maybe we want those headphones to work on two different devices. This is where Multipoint comes into play. That's the feature of Bluetooth 4 that allows those devices to pass seamlessly between the phone and the laptop, maintaining a connection to each. Apple calls their implementation of this feature Handoff.
Bluetooth 5 came in 2016, allowing for connections up to 240 meters, or around 800 feet - depending, as with other protocols, on what's between us and our devices. We also got speeds up to 2 Mbps, which drop as we move further away from a device. Thus we might get buffering issues or slower transfers over weaker connections, but not an outright dropped connection.
Bluetooth Evolves
Bluetooth was in large part developed to allow our phones to sync to our computers. Most don't do that any more. The developers also wanted to pave the way for wireless headsets. But it also allowed us to get smart scales, smart bulbs, wearables like smart watches and glasses, Bluetooth printers, webcams, keyboards, mice, GPS devices, thermostats, and even a little device that tells me when I need to water the plants. Many home automation devices, or IoT devices as we seem to call them these days, began as Bluetooth, but given that we want them to work when we take all our mostly mobile computing devices out of the home, many of those have moved over to Wi-Fi. Bluetooth was initially conceived as a replacement for the serial port. Higher throughput needs moved to USB and USB-C. Lower throughput has largely moved to Bluetooth, with the protocol split between Low Energy and higher bandwidth applications, which, with high definition audio, now include headphones. Once the higher throughput needs went to parallel and SCSI, but now there are so many other options.
And the line is blurred between what goes where. Billions of routers and switches have been sold, billions of wireless access points. Systems on a Chip now include Wi-Fi and Bluetooth together on the same chip. The programming languages for native apps have also given us frameworks and APIs where we can establish a connection over 5G, Wi-Fi, or Bluetooth, and then hand them off where the needs diverge. Seamless to those who use our software and elegant when done right.
Today over four billion Bluetooth devices ship per year, growing at about 10 percent a year. The original needs that various aspects of Bluetooth were designed for have moved to other protocols, and the future of the Personal Area Network may, at least in part, move to Wi-Fi or 5G. But for now it's a standard that has aged well and continues to make life easier for those who use it.
-
One of the hardest parts of telling any history is deciding which innovations are significant enough to warrant mention. Too many, and the history is so vast that it can't be told. Too few, and it's incomplete. Arguably, no history is ever complete. Yet there's a critical path of innovation to get where we are today, and hundreds of smaller innovations that get missed along the way, or are out of scope for this exact story.
Children have probably been placing sand into buckets to make sandcastles since the beginning of time. Bricks have survived from around 7500 BC in modern-day Turkey, where humans made molds to let clay dry and bake in the sun until it formed bricks. Bricks that could be stacked. And it wasn't long before molds were used for more. Now we can just print a mold on a 3D printer.
A mold is simply a block with a hollow cavity that allows putting some material in it. People then allow it to set and pull out a shape. Humanity has known how to do this for more than 6,000 years, initially with lost wax casting, with statues surviving from the Indus Valley Civilization, which stretched across parts of modern-day Pakistan and India. That evolved to allow casting in gold and silver and copper, and then flourished in the Bronze Age when stone molds were used to cast axes around 3,000 BCE. The Egyptians used plaster to cast molds of the heads of rulers. So molds and then casting were known throughout the time of the earliest written works, and so from the beginning of civilization.
The next few thousand years saw humanity learn to pack more into those molds, to replace objects from nature with those we made synthetically, and ultimately molding and casting did their part on the path to industrialization. As we came out of the industrial revolution, the impact of all these technologies gave us more and more options, both in terms of free time for humans to think and in new modes of thinking. And so in 1868 John Wesley Hyatt invented injection molding, patenting the machine in 1872. And we were able to mass produce not just with metal and glass and clay but with synthetics. More options came, but that whole idea of a mold, to avoid manual carving and be able to produce replicas, stretched far back into the history of humanity.
So here we are on the precipice of yet another world-changing technology becoming ubiquitous. And yet not quite. 3D printing still feels like a hobbyist's journey rather than a mature technology like we see in science fiction shows - Star Trek with its replicators, or printing a gun in the Netflix show Lost In Space. In fact, the initial idea of 3D printing came from a story called Things Pass By, written all the way back in 1945!
I have a love-hate relationship with 3D printing. Some jobs just work out great. Others feel very much like personal computers in the hobbyist era - just hacking away until things work. It's usually my fault when things go awry. Just as it was when I wanted to print things out on the dot matrix printer on the Apple II. Maybe I fed the paper in crooked, or didn't check that there was ink first, or sent the print job using the wrong driver. One of the many things that could go wrong.
But those fast prints don't match the reality of leveling and cleaning nozzles, waiting for them to heat up, and pulling filament out of weird places (how did it get there, exactly?). Or printing 10 add-ons for a printer to make it work the way it probably should have out of the box.
Another area where 3D printing is similar to the early days of the personal computer revolution is that there are a few different types of technology in use today. These include color-jet printing (CJP), direct metal printing (DMP), fused deposition modeling (FDM), laser additive manufacturing (LAM), multi-jet printing (MJP), stereolithography (SLA), selective laser melting (SLM), and selective laser sintering (SLS). Each can be better for a given type of print job. Some forms have flourished while others are either in their infancy or have been abandoned like extinct languages.
Language isolates are languages that don't fit into other families. Many are the last in a branch of a larger language family tree. Others come out of geographically isolated groups. Technology also has isolates. Konrad Zuse built computers in pre-World War II Germany that aren't considered to have influenced other computers. In other words, every technology seems to have a couple of false starts. Hideo Kodama filed the first patent to 3D print in 1980 - but his method of using UV light to harden material never got commercialized.
Another type of 3D printing includes printers that were essentially inkjets that shot metal alloys onto surfaces. Inkjet printing itself was invented by Ichiro Endo at Canon in the 1970s, supposedly when he left a hot iron on a pen and ink bubbled out - thus the "Bubble Jet" printer. And John Vaught at HP was working on the same idea at about the same time. These were patented and used to print images from computers over the coming decades.
Johannes Gottwald patented a printer like this in 1971. Experiments continued through the 1970s as companies like Exxon tried to improve various prototyping processes. Some of their engineers joined the inventor Robert Howard in the early 1980s to found a company called Howtek, which produced the Pixelmaster, augmenting the ink jet with hot-melt solid inks. That work went on to be used by Sanders Prototype, which evolved into a company called Solidscape, to market the Modelmaker. Similar techniques have since been used to print solar cells, living cells, tissue, and even edible birthday cakes.
That same technique is available with a number of different solutions but isn't the most widely marketable amongst the types of 3D printers available.
SLA
There's often a root from which most technology of the day is derived. Charles, or Chuck, Hull coined the term stereolithography, where he could lay down small layers of an object and then cure them with UV light, much as dentists do with fillings today. This is made possible by photopolymers, or plastics that are easily cured by an ultraviolet light. He then invented the stereolithography apparatus, or SLA for short, a machine that printed from the bottom to the top by focusing a laser on photopolymer in liquid form to cure the plastic into place. He worked on it in 1983, filed the patent in 1984, and was granted the patent in 1986.
Hull also developed a file format for 3D printing called STL. STL files describe the surface of a three-dimensional object geometrically, using Cartesian coordinates. Describing coordinates and vectors means we can make objects bigger or smaller when we're ready to print them. 3D printers print using layers, or slices. Those can change based on the filament on the head of a modern printer, the size of the liquid being cured, and even the heat of a nozzle. So the STL file gets put into a slicer that converts the surface coordinates into the polygons that are cured. Those polygons are laid down in layers, so a print may appear striated rather than perfectly curved depending on the size of the layers. However, more layers take more time and energy. Such is the evolution of 3D printing.
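To make the format concrete, here's a minimal Python sketch, purely illustrative and not a full STL library, that writes a single-triangle ASCII STL file. Each facet in the format is just a surface normal plus three vertices in Cartesian coordinates; real models are many thousands of these triangles, and a slicer turns them into layers:

```python
# A minimal sketch of the ASCII STL format: "solid", then a list of facets,
# each with a normal vector and exactly three vertices, then "endsolid".
def write_ascii_stl(path, name, facets):
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, vertices in facets:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in vertices:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One right triangle lying flat on the build plate, normal pointing up.
write_ascii_stl("triangle.stl", "demo",
                [((0.0, 0.0, 1.0),
                  [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)])])
```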
Hull then founded a company called 3D Systems in Valencia, California, to take his innovation to market. They sold their first printer, the SLA-1, in 1988. New technologies start out big and expensive, and that was the case with 3D Systems. They initially sold to large engineering companies, but when solid-state lasers came along in 1996 they were able to provide better systems for cheaper.
Languages also have other branches. Another branch in 3d printing came in 1987, just before the first SLA-1 was sold.
Carl Deckard and his academic adviser Joe Beaman at the University of Texas worked on a DARPA grant to experiment with creating physical objects with lasers. They formed a company called DTM to take their solution to market and filed a patent for what they called selective laser sintering. This compacts and hardens a material with a heat source without having to liquify it. A laser, guided by a computer, can move around a material and harden areas to produce a 3D model. Now, in addition to SLA, we had a second option, with the release of the Sinterstation 2500plus. 3D Systems then acquired DTM for $45 million in 2001.
FDM
After Hull published his findings for SLA and created the STL format, other standards we use today emerged. FDM is short for Fused Deposition Modeling and was created by Scott Crump in 1989. He then started a company, Stratasys, with his wife Lisa to take the product to market, taking the company public in 1994. Crump's first patent expired in 2009.
In addition to FDM, there are other formats and techniques. AeroMat made the first 3D printer that could produce metal in 1997. These use a laser additive manufacturing process, where lasers fuse powdered titanium alloys. Some go the opposite direction and create objects out of bacteria or tissue. That began in 1999, when the Wake Forest Institute for Regenerative Medicine grew a 3D printed urinary bladder in a lab to be used as a transplant. We now call this bioprinting, and it can use tissue and lasers to rebuild damaged organs or even create new ones. Printed organs are still in their infancy, with successful trials on smaller animals like rabbits. Another aspect is printing dinner using cell fibers from cows or other animals.
There are a number of types of materials used in 3D printing. Most printers today use a continuous feed of filament, small coiled strands of thermoplastic that melt instead of burning when heated. The most common in use today is PLA, or polylactic acid, a plastic initially created by Wallace Carothers of DuPont, the same person who brought us nylon, neoprene, and other plastic derivatives. It typically melts between 200 and 260 degrees Celsius. Printers can also take ABS filament, which is short for acrylonitrile butadiene styrene. Other filament types include HIPS, PET, CPE, PVA, and their derivative forms.
Filament is fed into a heated extruder assembly that melts the plastic. Once melted, the filament extrudes into place through a nozzle as a motor moves the nozzle along the X and Y axes for each layer.
Once a layer of plastic has been laid down in the areas required to make up the desired slice, a motor moves the extruder assembly up or down the Z axis between layers. Filament is typically either 1.75 millimeters or 3 millimeters in diameter and comes on spools of between half a kilogram and two kilograms.
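What drives those motors is just a long list of plain-text motion commands emitted by the slicer. As a rough illustration only (the temperatures, coordinates, and extrusion amounts below are invented, and real slicer output runs to thousands of lines), a fragment of standard RepRap/Marlin-style G-code generated with Python might look like this:

```python
# Hedged sketch of slicer output: heat the nozzle, trace part of one slice
# in X/Y while extruding, then step the Z axis to the next layer.
layer_height = 0.2  # mm, a common FDM layer height

gcode = [
    "M109 S205",                      # heat the extruder to ~205 C (PLA range) and wait
    "G28",                            # home all axes
    "G92 E0",                         # zero the extrusion counter
    "G1 X20 Y20 F3000",               # travel move, no extrusion
    "G1 X60 Y20 E2.0 F1200",          # extrude a line along X for this slice
    "G1 X60 Y60 E4.0",                # ...and along Y
    f"G1 Z{layer_height:.2f} F600",   # move the Z axis up to the next layer
]
print("\n".join(gcode))
```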
These thermoplastics cool very quickly. Once all of the slices are squirted into place, the print is removed from the bed and the nozzle cools off. Filament comes in a number of colors and styles. For example, wood fibers can be added to filament to get a wood-grained finish. Metal can be added to make prints appear metallic and be part metal.
Printing isn't foolproof, though. Filament often gets jammed or the spool gets stuck. Filament also needs to be stored in a temperature and moisture controlled location or it can cause jobs to fail. Sometimes the software used to slice the .stl file has an incorrect setting, like the wrong filament size. But in general, 3D printing using FDM is pretty straightforward these days. Yet this is technology that should have moved faster in terms of adoption. The past 10 years have seen more progress than the previous ten, though - primarily due to the maker community.
Enter the Makers
The FDM patent expired in 2009. In 2005, a few years before it expired, Dr. Adrian Bowyer started a project to bring inexpensive 3D printers to labs and homes around the world. That project evolved into what we now call the Replicating Rapid Prototyper, or RepRap for short.
RepRap evolved into an open source effort to create self-replicating 3D printers, and by 2008 the Darwin was the first printer built from RepRap designs. As a community started to form, more collaborators designed more parts. Some were custom parts to improve the performance of the printer, or to let the printer replicate itself into other printers. Others held the computing mechanisms in place. Some even wrote code so the printer could boot off a MicroSD card, and then added a network interface so files could be uploaded to the printer wirelessly.
There was a rising tide of printers. People were reading about what 3D printers could do and wanted to get involved. There was also a movement in the maker space, so people wanted to make things themselves. There was a craft to it. Part of that was wanting to share, whether at a maker space or by sharing ideas and plans and code online, like the RepRap team had done.
One of those maker spaces was NYC Resistor, founded in 2007. Bre Pettis, Adam Mayer, and Zach Smith from there took some of the work from the RepRap project and had ideas for a few new projects they'd like to start. The first was a site that Zach Smith created called Thingiverse. Bre Pettis joined in and they allowed users to upload .stl files and trade them. It's now the largest site for trading hundreds of thousands of designs to print almost anything imaginable. Well, everything except guns.
Then came 2009. The patent for FDM expired and a number of companies responded by launching printers and services. Almost overnight the price of a 3D printer fell from $10,000 to $1,000 and continued to drop. Shapeways had been founded the year before to take files and print them for people. Pettis, Mayer, and Smith from NYC Resistor also founded a company called MakerBot Industries.
They'd already made a bit of a name for themselves with the Thingiverse site. They knew the mind of a maker. And so they decided to sell a kit to people who wanted to build their own printers. They sold 3,500 kits in the first couple of years. They had a good brand and knew the people who bought these kinds of devices, so they took venture funding to grow the company, raising $10M in 2011 in a round led by the Foundry Group, along with Bezos, RRE, 500 Startups, and a few others.
They hired and grew fast. Smith left in 2012, and they were getting closer and closer with Stratasys, who if we remember were the original creators of FDM. Stratasys ended up buying the company in 2013 for $403M. Sales were disappointing, so there was a change-up in leadership, with Pettis leaving, and they've since become much more an additive manufacturing company than one built to appeal to makers. And yet the opportunity to own that market is still there.
This was also the era of Kickstarter campaigns. Plenty of 3D printing companies launched through Kickstarter, including some taking PLA (a biodegradable filament) and ABS materials to the next level: the ExtrusionBot, the MagicBox, the ProtoPlant, the Protopasta, Mixture, Plybot, Robo3D, Mantis, and so many more.
Meanwhile, 3D printing was in the news. 2011 saw the University of Southampton design a 3D printed aircraft. Ecologic was printing cars, and practically every other car company followed suit, fabricating prototypes with 3D printers - even full cars that ran. Some did so on their own; some accidentally, when parts were published online as .stl files, violating various patents.
Ultimaker was another RepRap company that came out of the early Darwin builds. Martijn Elserman, Erik de Bruijn, and Siert Wijnia couldn't get the Darwin to work, so they designed a new printer and took it to market. After a few iterations, they came up with the Ultimaker 2 and have since kept growing and releasing new printers.
A few years later, a team of Chinese makers - Jack Chen, Huilin Liu, Jingke Tang, Danjun Ao, and Dr. Shengui Chen - took the RepRap designs and started a company called Creality to manufacture DIY (Do It Yourself) kits. They have maintained the open source ethos of 3D printing that they inherited from RepRap and developed version after version, even raising over $33M on Kickstarter in 2018 to develop the Ender 6, then building a new factory; they now have the capacity to ship well over half a million printers a year.
The future of 3D Printing
We can now buy 3D printing pens and printers from over 170 manufacturers, including 3D Systems, Stratasys, and Creality, but also down-market solutions like Fusion3, Formlabs, Desktop Metal, Prusa, and Voxel8. There's also the RecycleBot concept, and additional patents expire every year.
There is little doubt that at some point, instead of driving to Home Depot to get screws or basic parts, we'll print them. Need a new auger for the snow blower? Just print it. Cover on the weed eater break? Print it. Need a dracolich mini for the next Dungeons and Dragons game? Print it. Need a new pinky toe? OK, maybe that's a bit far. Or is it? In 2015, Sweden's Cellink released a bio-ink made from seaweed and algae, which could be used to print cartilage, and later released the INKREDIBLE 3D printer for bioprinting.
The market in 2020 was valued at $13.78 billion, with 2.1 million printers shipped. That's expected to grow at a compound annual growth rate of 21% over the next few years. But a lot of that is still healthcare, automotive, aerospace, and prototyping. Apple made the personal computer simple and elegant, but no Apple has yet emerged for 3D printing. Instead it still feels like the Apple II era, where there are 3D printers in a lot of schools and many offer classes on generating files and printing.
3D printers are certainly great for prototypers and additive manufacturing. They're great for hobbyists, who we call makers these days. But there will come a time when there is a printer in most homes, the way we have electricity, televisions, phones, and other critical technologies. A few things have to happen first, though, to make the printers easier to use. These include:
Every printer needs to automatically level. This is one of the biggest reasons jobs fail and new users become frustrated.
More consistent filament. Spools are still all just a little bit different.
Sensors in the extruder that detect if a job should be paused because the filament is jammed, humid, or caught. This adds the ability to resume print jobs and waste less filament and time.
Automated slicing in the printer microcode that senses the filament and slices accordingly.
Better system boards (e.g. there's a tool called Klipper that moves the math from the system board on a Creality Ender 3 to a Raspberry Pi).
Cameras on the printer that watch jobs and use TinyML to determine as early as possible whether they are going to fail, so printing can halt and start over.
Better support. Most of the consumer solutions don't have great support; maybe users are limited to calling a place in a foreign country where the support hours don't make sense for them, or maybe the products are just too much of a hacker/maker/hobbyist solution.
An option for color printing. This could start as a really expensive sprayer or ink like inkjet printers use. We love to paint the minis we make for Dungeons and Dragons, but automated coloring at amazingly accurate resolutions could create amazing things.
For a real game changer, the RecycleBot concept needs to be merged with the printer. Imagine if we dropped our plastics into a recycling bin that the 3D printers of the world used to create filament. This would help reduce the amount of plastic used in the world in general. And when combined with less moving around of cheap plastic goods that could instead be printed at home, it also means less energy consumed by transporting goods.
The 3D printing technology is still a generation or two away from being truly mass-marketed. Most hobbyists don't necessarily think of building an elegant, easy-to-use solution, because they are so experienced that it's hard to understand what the barriers to entry are for any old person. But the company that finally manages to crack that nut might just be the next Apple, Microsoft, or Google of the world.
-
The Mogollon culture was an indigenous culture in the Western United States and Mexico that ranged from New Mexico and Arizona to Sonora, Mexico and out to Texas. They flourished from around 200 CE until the Spanish showed up and claimed their lands. The cultures that pre-existed them date back thousands more years, although archaeology has yet to pinpoint exactly how those evolved. Like many early cultures, they farmed and foraged. As they farmed more, their homes became more permanent, and around 800 CE they began to build more durable homes that helped protect them from wild swings in the climate. We call those homes adobes today, and the people who lived in those pueblos and irrigated their crops, often moving higher into the mountains, we call the Puebloans - or Pueblo Peoples.
Adobe homes are similar to those found in ancient cultures in what we call Turkey today, but that's an independent evolution.
Adobe Creek was once called Arroyo de las Yeguas by the monks from Mission Santa Clara and was then renamed San Antonio Creek by the soldier Juan Prado Mesa when the land around it was given to him by the governor of Alta California at the time, Juan Bautista Alvarado. That's the same Alvarado as the street, if you live in the area. The creek runs for over 14 miles north from Black Mountain and through Palo Alto, California. The ranchers built their adobes close to the creeks. American settlers led the Bear Flag Revolt in 1846 and took over the garrison of Sonoma, establishing the California Republic - which claimed lands indigenous peoples had lived on for centuries. There were only 33 of them at first, but after John Fremont (yes, he whom that street is named after as well) encouraged the Americans, they raised an army of over 100 men and Fremont helped them march on Sutter's fort, now with the flag of the United States, thanks to Joseph Revere of the US Navy (yes, another street in San Francisco bears his name).
James Polk had pushed to expand the United States. Manifest Destiny. Remember the Alamo. Etc. The fort at Monterey fell and the army marched south. Admiral Sloat got involved. They named a street after him. General Castro surrendered - he got a district named after him. Commodore Stockton announced the US had taken all of California soon after that. Manifest Destiny was nearly complete. He's now basically the patron saint of a city, even if few there know who he was. The forts along the El Camino Real that linked the 21 Spanish Missions - a 600-mile road once walked by their proverbial father, Junípero Serra, following the Portolá expedition of 1769 - fell. Stockton took each, moving into Los Angeles, then San Diego. Practically all of Alta California fell with few shots. This was nothing like the battles for the independence of Texas, like when Santa Anna reclaimed the Alamo Mission.
Meanwhile, the waters of Adobe Creek continued to flow. The creek was renamed in the 1850s after Mesa built an adobe on the site. Adobe Creek it was. Over the next 100 years, the area evolved into a paradise with groves of trees and then groves of technology companies. The story of one begins a little beyond the borders of California.
Utah was initially explored by Francisco Vázquez de Coronado in 1540 and settled by Europeans in search of furs and others who colonized the desert, including those who established the Church of Jesus Christ of Latter-day Saints, or the Mormons, who settled there in 1847, just after the Bear Flag Revolt. The United States officially acquired the territory in 1848, Utah became a territory, and after a number of map changes where the territory got smaller, it was finally made a state in 1896. The University of Utah had been founded all the way back in 1850, though - and was re-established in the 1860s.
100 years later, the University of Utah was a hotbed of engineers who pioneered a number of graphical advancements in computing. John Warnock went to grad school there and then went on to co-found Adobe and help bring us PostScript. Historically, a PS, or postscript, was a message placed at the end of a letter, following the signature of the author. The PostScript language was a language to describe a page of text computationally. It was created at Adobe by Warnock, Doug Brotz, Charles Geschke, Bill Paxton (who worked on the Mother of All Demos with Doug Engelbart during the development of the oNLine System, or NLS, in the late 60s and then at Xerox PARC), and Ed Taft.
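To make "describe a page computationally" concrete, here is a small sketch that writes a minimal, valid PostScript file by hand. The page is a little program rather than a grid of dots; any PostScript interpreter (Ghostscript, for example) should render it the same way. The file name is just an example.

```python
# Write a one-line PostScript page description to disk.
page = """%!PS
/Helvetica findfont 24 scalefont setfont   % pick a font and size
72 720 moveto                              % one inch from the left, near the top
(Hello from 1984) show                     % draw the text
showpage                                   % emit the page
"""

with open("hello.ps", "w") as f:
    f.write(page)
```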
Warnock invented the Warnock algorithm while working on his PhD and went to work at Evans & Sutherland with Ivan Sutherland, who effectively created the field of computer graphics. Geschke got his PhD at Carnegie Mellon in the early 1970s and then went off to Xerox PARC. They worked with Paxton at PARC, and before long these PhDs and mathematicians had worked out the algorithms, and then the languages, to display images on computers while working on Interpress graphics at Xerox. Geschke left Xerox and started Adobe, Warnock joined him, and they went to market with their own take on Interpress as PostScript, which became a foundation for the Apple LaserWriter to print graphics. Not only that, PostScript could be used to define typefaces programmatically and later to display any old image.
Those technologies became the foundation for the desktop publishing industry. Apple released the Mac in 1984 and other vendors brought in PostScript to describe graphics in their proprietary fashion. Adobe released PostScript Level 2 in 1991 and then PostScript 3 in 1997. Other vendors made their own or furthered standards in their own ways, and Adobe could have faded off into the history books of computing. But Adobe didn't create one product, they created an industry, and the company they created to support that young industry created more products in that mission. Steve Jobs tried to buy Adobe before that first Mac was released, for $5,000,000. But Warnock and Geschke had a vision for an industry in mind. They had a lot of ideas, but development was fairly capital intensive, as were go-to-market strategies. So they went public on the NASDAQ in 1986. They expanded their PostScript distribution and sold it to companies like Texas Instruments for their laser printer, and to other companies who made IBM-compatible computers. They got up to $16 million in sales that year.
Warnock's wife was a graphic designer. This is where we see a diversity of ideas help us think about more than math. He saw how she worked and could see a world where Ivan Sutherland's Sketchpad could be much more, given how far CPUs had come since the TX-0 days at MIT. So Adobe built and released Illustrator in 1987. By 1988 they broke even on sales and it raked in $19 million in revenue. Sales were strong in the universities, but PostScript was still the hot product, selling to printer companies, typesetters, and other places where Adobe signed license agreements.
At this point, we see the math at work: Cartesian coordinates, run through geometric algorithms, put pixels where they should be. But while this was far more efficient than storing a dot for every coordinate of a larger image, drawing a dot at a pixel location was still the easier technology to understand.
They created Adobe Streamline in 1989 and Collector's Edition to create patterns. They listened to graphic designers and built what they heard humans wanted.
Photoshop
Nearly every graphic designer raves about Adobe Photoshop. That's because Photoshop is the best-selling graphics editing tool; it has matured far beyond most other traditional solutions and now has thousands of features that allow users to manipulate images in practically any way they want. Adobe Illustrator was created in 1987 and quickly became the de facto standard in vector-based graphics. Photoshop began life in 1987 as well, when Thomas and John Knoll wanted to build a simpler tool to create graphics on a computer. Rather than vector graphics, they created a raster graphical editor.
They made a deal with Barneyscan, a well-known scanner company, which distributed over two hundred copies of Photoshop with its scanners, and Photoshop became a hit as the first editing software most people had heard about. Vector images are typically generated from Cartesian coordinates based on geometric formulas and so scale more easily. Raster images are comprised of a grid of dots, or pixels, and can be more realistic.
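A rough sketch of that vector/raster distinction, in plain Python with no graphics library assumed: the "vector" shape is a formula we can re-evaluate at any size, while the "raster" shape is a fixed grid of pixels that we can only stretch.

```python
from math import cos, sin, tau

def vector_circle(radius, points=8):
    """Points on a circle computed from geometry - re-render at any radius."""
    return [(radius * cos(tau * i / points), radius * sin(tau * i / points))
            for i in range(points)]

def scale_raster(pixels, factor):
    """Nearest-neighbor scaling of a pixel grid - detail is capped at the original."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in pixels for _ in range(factor)]

print(vector_circle(10))                   # same shape at any size, always smooth
print(scale_raster([[0, 1], [1, 0]], 2))   # bigger, but just chunkier pixels
```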
Great products are rewarded with competition. CorelDRAW was created in 1989, when Michel Bouillon and Pat Beirne built a tool to create vector illustrations. Sales got slim after other competitors entered the market, so the Knoll brothers got in touch with Adobe and licensed the product through them. The software was then launched as Adobe Photoshop 1 in 1990. They released Photoshop 2 in 1991. By now it supported paths and, given that Adobe also made Illustrator, EPS and CMYK rasterization - still features in Photoshop today.
They launched Adobe Photoshop 2.5 in 1993, the first version that could be installed on Windows. This version came with a toolbar for filters and 16-bit channel support. Photoshop 3 came in 1994, and Thomas Knoll created what was probably the most important feature ever added, one that's become a standard in graphical applications since: layers. Now a designer could create a few layers that each had their own elements, and hide layers or make layers more transparent. These could separate the subject from the background and led to entirely new capabilities, like an almost faux three-dimensional appearance of graphics.
Then came version 4 in 1996, one of the more widely distributed versions and very stable. They added automation, which was later considered part of becoming a platform - open up a scripting language, or a subset of a language, so others can build tools that integrate with or sit on top of a product, thus locking people into the product once they've automated tasks to increase their efficiency.
Adobe Photoshop 5.0 added editable type - text layers that no longer had to be rasterized right away. Keep in mind that Adobe owned technology like PostScript and so could bring technology from Illustrator to Photoshop or vice versa, and integrate with other products - like export to PDF by then. They also added multiple levels of undo, a magnetic lasso, and improved color management, and it was now a great tool for more advanced designers. Then in 5.5 they added a Save for Web feature, in a sign of the times. Users could create vector shapes, and Adobe continued to improve the user interface.
Photoshop 5 was also a big jump in complexity. Layers were easy enough to understand, but Photoshop had been meant to be a subset of Illustrator's features and had become far more than that. So in 2001 they released Photoshop Elements. By now they had a large portfolio of products, and Elements was meant to appeal to the original customer base - the ones who were beginners and maybe not professional designers. By now, some people spent 40 or more hours a week in tools like Photoshop and Illustrator.
Adobe Today
Adobe had released PostScript, Illustrator, and Photoshop. But they have one of the most substantial portfolios of products of any company. They also released Premiere in 1991 to get into video editing. They acquired Aldus Corporation to get into more publishing workflows with PageMaker, and used that acquisition to get into motion graphics with After Effects. They acquired dozens of companies and released their products as well. Adobe also released the PDF format, to describe full pages of information (or files that spread across multiple pages), in 1993, along with Adobe Acrobat to use those files. Acrobat became the de facto standard for page distribution so people didn't have to download fonts to render pages properly. They dabbled in audio editing when they acquired Cool Edit Pro from Syntrillium Software, and now sell it as Adobe Audition.
Adobe's biggest acquisition was Macromedia in 2005. Here, they added a dozen new products to the portfolio, including Flash, Fireworks, the WYSIWYG web editor Dreamweaver, ColdFusion, Flex, and Breeze, which is now called Adobe Connect. By now, they'd also created what we call Creative Suite - packages of applications that could be used for given tasks. Creative Suite, and later Creative Cloud, also signaled a transition into a software as a service, or SaaS, mindset. Now customers could pay a monthly fee for a user license rather than buy large software packages each time a new version was released.
Adobe had always been a company that made products to create graphics. They expanded into online marketing and web analytics when they bought Omniture in 2009 for $1.8 billion. These products are now normalized into the naming convention used for the rest as Adobe Marketing Cloud. Flash fell by the wayside, so the next wave of acquisitions was for more mobile-oriented products. This began with Day Software and then Nitobi in 2011. And they furthered their Marketing Cloud support by acquiring one of the larger competitors, Marketo, in 2018, and Workfront in 2020.
Given how many people started working from home, they also extended their offerings into pure-cloud video tooling with the acquisition of Frame.io in 2021. And here we see a company started by a bunch of true computer scientists from academia in the early days of the personal computer that has become far more. They could have been rolled into Apple, but they had a vision of a creative suite of products that could be used to make the world a prettier place. Creative Suite, then Creative Cloud, shows a move of the same tools into a more online delivery model. Other companies come along to do similar tasks, like the infinite digital whiteboard Miro, so they have to innovate to stay marketable. They have to continue to increase sales, so they expand into other markets, like the most adjacent one, Marketing Cloud.
At 22,500+ employees and with well over $12 billion in revenues, they have a lot of families dependent on maintaining that growth rate. And so the company becomes more than the culmination of their software. They become more than graphic design, web design, video editing, animation, and visual effects. Because in software, if revenues don't grow at a rate greater than 10 percent per year, the company simply isn't outgrowing the size of the market and likely won't be able to justify a stock price at an inflated price-to-earnings ratio that implies explosive growth. And yet once a company saturates sales in a given market, they have shareholders to justify their existence to. Adobe has survived many an economic downturn and boom time with smart, measured growth and is likely to continue doing so for a long time to come.
-
Gutenberg shipped the first working printing press around 1450, and the typeface was born. Before then most books were handwritten, often in blackletter calligraphy. And they were expensive.
The next few decades saw Nicolas Jenson develop the Roman typeface and Aldus Manutius and Francesco Griffo create the first italic typeface. This represented a period when people were experimenting with making type that would save space. The 1700s saw the start of a focus on readability. William Caslon created the Old Style typeface in 1734. John Baskerville developed Transitional typefaces in 1757. And Firmin Didot and Giambattista Bodoni created two typefaces that would become the Modern family of serifs. Then slab serif, which we now call Antique, came in 1815, ushering in an era of experimenting with using type for larger formats, suitable for advertisements in various printed materials. These were necessary as more presses were printing more books, and were made possible by new levels of precision in metal casting.
People started experimenting with various forms of typewriters in the mid-1860s and by the 1920s we got Frederic Goudy, the first real full-time type designer. Before him, it was part of a job. After him, it was a job. And we still use some of the typefaces he crafted, like Copperplate Gothic. And we saw an explosion of new fonts like Times New Roman in 1931.
At the time, most typewriters used typefaces on the end of a metal shaft. Hit a key and the shaft hammers onto an inked ribbon and leaves a letter on the page. Kerning, or the space between characters, and letter placement were often there to reduce the chance that those metal hammers jammed. And replacing a font would have meant replacing tons of precision parts. Then came the IBM Selectric typewriter in 1961. Here we saw precision parts that put all those letters on a ball. Hit a key and the ball rotates and presses the ink onto the paper. And the ball could be replaced. A single document could now have multiple fonts without a ton of work.
Xerox exploded around the same time with the Xerox 914, one of the most successful products of all time. Now we could quickly type documents with multiple fonts - and photocopy them. And some of the numbers on those fancy documents were being spat out by fancy computers, with their tubes. But as computers became transistorized heading into the 60s, it was only a matter of time before we put fonts on computer screens.
Here, we initially used bitmaps to render letters onto a screen. By bitmap we mean an array of bits that maps to the pixels on a screen, defining which should be lit to form each character. We used to call these raster fonts, but the drawback was that to make characters bigger, we needed a whole new map of bits. To go to a bigger screen, we probably needed a whole new map of bits. As people thought about things like bold, underline, and italics - guess what, also a new file. But through the 50s, transistor counts weren't nearly high enough to do something other than bitmaps; they rendered very quickly, and displays weren't very high quality, so who could tell the difference anyways.
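Here is what a bitmap glyph literally is, as a small sketch. The 8x8 "A" below is hypothetical, not taken from any real font; making it bigger or bold means drawing a whole new map, which is exactly the limitation described above.

```python
# A raster font stores each glyph as a literal map of bits: one byte per row here,
# one bit per pixel.
GLYPH_A = [
    0b00011000,
    0b00100100,
    0b01000010,
    0b01111110,
    0b01000010,
    0b01000010,
    0b01000010,
    0b00000000,
]

for row in GLYPH_A:
    # Walk the bits left to right and print a mark for each lit pixel.
    print("".join("#" if row & (1 << (7 - bit)) else " " for bit in range(8)))
```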
Whirlwind was the first computer to project real-time graphics on the screen and the characters were simple blocky letters. But as the resolution of screens and the speed of interactivity increased, so did what was possible with drawing glyphs on screens.
Rudolf Hell was a German engineer experimenting with using cathode ray tubes to project an image onto photosensitive paper and thus print using a CRT. He designed a simple font called Digital Grotesk, loosely based on Neuzeit Book, in 1968. It looked good on the CRT and on the paper, and that font went on to be used to digitize typesetting.
And we quickly realized bitmaps weren't efficient for drawing fonts to the screen, so by 1974 we moved to outline, or vector, fonts. Here a Bézier curve was drawn onto the screen using an algorithm that traced the outline of the character, or glyph, and then filled in the space between. These took up less memory and so drew on the screen faster. They could be defined in an operating system and were used not only to draw characters, but also by some game designers to draw entire screens of information by defining a character as a block, taking up less memory to do graphics.
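The math behind an outline font segment is compact. As a sketch, the quadratic Bézier below evaluates B(t) = (1-t)²·P0 + 2(1-t)t·P1 + t²·P2 for a hypothetical glyph segment; scaling the glyph is just scaling the control points, with no new bitmap required. (TrueType later used quadratics; Type 1 and CFF use cubics.)

```python
def quad_bezier(p0, p1, p2, steps=10):
    """Return points along a quadratic Bezier curve defined by three control points."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# One segment of a made-up glyph outline, then the same segment at twice the size.
outline = quad_bezier((0, 0), (50, 100), (100, 0))
scaled  = quad_bezier((0, 0), (100, 200), (200, 0))
```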
These were scalable, and by 1979 another German, Peter Karow, used spline algorithms to write Ikarus, software that allowed a person to draw a shape on a screen and rasterize it. Now we could graphically create fonts that were scalable.
In the meantime, the team at Xerox PARC had been experimenting with different ways to send pages of content to the first laser printers. Bob Sproull and Bill Newman created the Press format for the Star. But this wasn't as flexible as what Karow would create. John Gaffney, who was working with Ivan Sutherland at Evans & Sutherland, had been working with John Warnock on an interpreter that could pull information from a database of graphics. At Xerox, Warnock teamed up with Martin Newell to create JaM, which harnessed the latest chips to process graphics and character type for printers. As it progressed, they renamed it Interpress.
Chuck Geschke started the Imaging Sciences Laboratory at Xerox PARC and eventually left Xerox with Warnock to start a company called Adobe in Warnock's garage, named after the creek behind his house. Bill Paxton had worked on "The Mother of All Demos" with Doug Engelbart at Stanford, where he got his PhD, and then moved to Xerox PARC. There he worked on bitmap displays, laser printers, and GUIs - and so he joined Adobe early on, in 1983, and worked on the font algorithms and helped ship a page description language, along with Chuck Geschke, Doug Brotz, and Ed Taft.
Steve Jobs tried to buy Adobe in 1982 for $5 million. Instead they sold him just shy of 20% of the company and gave him a five-year license for PostScript. This allowed them to focus on making the PostScript language more extensible and on creating the Type 1 fonts. These had two parts: one was a set of bitmaps, and the other was a font file that could be used to send the font to a device.
We see this time and time again. The simpler an interface and the more down-market the science gets, the faster we see innovative industries come out of the work done. There were lots of fonts by now. The original 1984 Mac saw Susan Kare work with Jobs and others to ship a bunch of fonts named after cities like Chicago and San Francisco. She would design the fonts on paper, then work out the hexadecimal notation for each letter of each font and type it in manually.
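A tiny sketch of what that hand conversion amounts to. The row below is hypothetical, not an actual glyph of hers: each filled square on the graph paper is a 1, each empty square a 0, and eight of them pack into one two-digit hex byte.

```python
# Convert one hand-drawn row of pixels into the hex byte that would be typed in.
row = "..XXXX.."                     # a designer's graph-paper row (hypothetical)
bits = "".join("1" if c == "X" else "0" for c in row)
print(hex(int(bits, 2)))             # 0b00111100 -> 0x3c
```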
Previously, custom fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go its own route. Kare painstakingly created new fonts and gave them the names of train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities. And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born - with her personally developing Geneva, Chicago, and Cairo. And she did it on a 9 x 7 grid.
I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and switching between the fonts, marveling at the typefaces. I'd certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italics, outline, and bold. Those were all her. And she inspired a whole generation of innovation.
Here, we see a clean line from Ivan Sutherland and the pioneering work done at MIT, to the University of Utah, to Stanford through the oNLine System (or NLS), to Xerox PARC, and then to Apple. But then came the rise of Windows and other graphical operating systems. As Apple's five-year license for PostScript came and went, they started developing their own font standard as a competitor to Adobe, which they called TrueType.
Here we saw Times Roman, Courier, and Symbol fonts that could replace the PostScript fonts, along with updates to Geneva, Monaco, and others. Apple and Microsoft may not have gotten along, but Apple licensed TrueType to Microsoft nonetheless to make sure it was widely adopted, and in exchange got a license for TrueImage, a page description language compatible with PostScript. Given how high resolution screens had gotten, it was also time for the birth of anti-aliasing. Here we could clean up the blocky "jaggies," as the gamers call them. Vertical and horizontal lines in the 8-bit era looked fine but distorted at higher resolutions, and so spatial anti-aliasing and then post-processing anti-aliasing were born.
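Spatial anti-aliasing in miniature, as a sketch: instead of asking "is this pixel inside the shape?" (a yes/no answer, which produces jaggies), sample each pixel several times and shade by coverage. The "shape" here is simply everything below the line y = x; real font rasterizers are far more sophisticated.

```python
def coverage(px, py, samples=4):
    """Fraction of a pixel covered by the region y <= x, via 4x4 supersampling."""
    inside = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if y <= x:
                inside += 1
    return inside / samples ** 2       # 0.0 = background, 1.0 = fully covered

aliased     = [[int(py + 0.5 <= px + 0.5) for px in range(6)] for py in range(6)]
antialiased = [[round(coverage(px, py), 2) for px in range(6)] for py in range(6)]
# The diagonal cells land at 0.5 - the gray pixels that smooth the edge.
```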
By the 90s, Adobe was looking for an answer to TrueImage. So 1993 brought us PDF, now an international standard as ISO 32000-1:2008. Acrobat Reader and other tools were good to Adobe for many years, along with Illustrator, then Photoshop, and then the other products in the Adobe portfolio. By this time, even though Steve Jobs was gone, Apple was hard at work on new font technology that resulted in Apple Advanced Typography, or AAT. AAT gave us ligature control, better kerning, and the ability to write characters along different axes.
But even though Jobs was gone, negotiations to license AAT to Microsoft broke down. They were bitter competitors, and Windows 95 wasn't even out yet. So Microsoft started work on OpenType, its own standardized font format, in 1994, and Adobe joined the project to ship the next generation in 1997. That evolved into an open standard by the mid-2000s - and once an open standard, often the de facto standard, as opposed to those that need to be licensed.
By then the web had become a thing. Early browsers and the wars between them to increment features meant developers had to build and test on potentially 4 or 5 different computers and often be frustrated by the results. So the W3C began standardizing how a lot of elements worked, in Extensible Markup Language, or XML. Images, layouts, colors, even fonts. SVGs are XML-based vector images; in other words, the browser interprets a language that describes how to draw the image. That became a way to render graphics that scale cleanly in the browser.
The Web Open Font Format, or WOFF 1, was published in 2009 with contributions by Dutch educator Erik van Blokland, Jonathan Kew, and Tal Leming. This built on the CSS font styling rules that had shipped in Internet Explorer 4 and would slowly be added to every browser shipped, including Firefox since 3.6, Chrome since 6.0, Internet Explorer since 9, and Apple's Safari since 5.1. Then WOFF 2 added Brotli compression to get sizes down and render faster. WOFF has been part of the W3C open web standards since 2011.
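To get a feel for where WOFF 2's size win comes from, here is a sketch of just the Brotli compression step. This is not the full WOFF2 container or its font-table transforms; it assumes the third-party brotli package and a hypothetical TrueType file on disk.

```python
import brotli  # pip install brotli

with open("SomeFont.ttf", "rb") as f:   # hypothetical font file
    raw = f.read()

packed = brotli.compress(raw)           # WOFF2 layers table transforms on top of this
print(f"{len(raw)} bytes -> {len(packed)} bytes")
```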
Out of Apple's TrueType came TrueType GX, which added variable fonts. Here, a single font file could contain a range of variants of the initial font, so a family of fonts could live in a single file. OpenType added variable fonts in 2016, with Apple, Microsoft, and Google all announcing support. And of course the company that had been there since the beginning, Adobe, jumped on board as well. Fewer font files, faster page loads.
So here we've looked at the progression of fonts: from the printing press, becoming more efficient to conserve paper, through the advent of the electric typewriter, to the early bitmap fonts for screens, to the vectorization led by Adobe into the Mac and then Windows. We also saw the font rethought entirely so that multiple scripts, character sets, and axes can be represented and rendered efficiently.
I am now converting all my user names into pig Latin for maximum security. Luckily those are character sets that are pretty widely supported. And OpenType-SVG will even allow me to add spiffy color to my pig Latin glyphs. It makes us wonder what's next for fonts. Maybe being able to design our own, or more to the point, customize those developed by others to make them our own. We didn't touch on emoji yet. But we'll just have to save the evolution of character sets and emoji for another day.
In the meantime, let's think on the fact that fonts are such a big deal because Steve Jobs took a calligraphy class from a former Trappist monk named Robert Palladino while enrolled at Reed College. Today we can painstakingly choose just the right font with just the right meaning because Palladino left the monastic life to marry and have a son. He taught Jobs about serif and sans serif, kerning, and the art of typography.
That style and attention to detail was one aspect of the original Mac that taught the world that computers could have style and grace as well. It's not hard to imagine entire computers still supporting only one font, or even one font per document. Palladino never owned or used a computer, though. His influence can be felt through his pupil, Jobs. And it's actually amazing how many people who had such dramatic impacts on computing never really used one, because so many smaller evolutions came after them. What evolutions do we see on the horizon today? And how many who put a snippet of code on a service like GitHub may never know the impact they have on so many?
-
In our previous episode, we looked at the history of flight - from dinosaurs to the modern aircraft that carry people and things all over the world. Those helped to make the world smaller, but UAVs and drones have had a very different impact on how we lead our lives - and will have an even more substantial impact in the future. That might not have seemed so likely in the 1700s, though, when unmanned flight meant little more than a balloon.
Unmanned Aircraft
Napoleon conquered Venice in 1797 and then ceded control to the Austrians the same year. He took it back as part of a treaty in 1805 and established the first Kingdom of Italy, then lost it in 1814. And so the Venetians revolted in 1848. One of the ways the Austrians crushed the revolt was by employing balloons, which had been invented in 1783, packed with explosives. 200 balloons packed with bombs later, one found a target. Not a huge surprise that such techniques didn't get used again for some time. The Japanese tried a similar tactic to bomb the US in World War II - then there were random balloons in the 2020s, just for funsies. A few other inventions needed to find one another in order to evolve into something entirely new. Radio was invented in the 1890s. Nikola Tesla built a radio-controlled boat in 1898. Airplanes came along in 1903. Then came airships steered by radio. So it was just a matter of time before the cost of radio equipment came down enough to match the cost of building smaller airplanes that could be controlled by remote control as well.
The first documented occurrence of that was in 1907 when Percy Sperry filed a patent for a kite fashioned to look and operate like a plane, but glide in the wind. The kite string was the first remote control. Then electrical signals went through those strings and eventually the wire turned into radio - the same progress we see with most manual machinery that needs to be mobile.
Technology moves upmarket, so the Sperry Corporation fitted an aircraft with autopilot features in 1912. At that point, that was just a gyroscopic heading indicator and attitude indicator connected to hydraulically operated elevators and rudders, but over time autopilots would be able to react to all types of environmental changes to save pilots from constantly reacting manually while flying. That helped pave the way for longer and safer flights, as automation often does.
Then came World War I. Tesla discussed aerial combat using unmanned aircraft in 1915, and Charles Kettering (who developed the electric cash register and the electric car starter) gave us the Kettering Bug, a flying, remote-controlled torpedo of sorts. Elmer Sperry worked on a similar device. British war engineers like Archibald Low also made attempts, but the technology didn't evolve fast enough, and by the end of the war there wasn't much interest in military funding.
But a couple of decades can do a lot, both for miniaturization and for the maturity of technology. 1936 saw the development of the first navy UAV, by the name of Queen Bee, under Admiral William H. Standley, and then the QF2. It was primarily used for aerial target practice as a low-cost radio-controlled drone. The idea was an instant hit, and later on the military called for the development of similar systems, many of which came from Hollywood of all places.
Reginald Denny was a British gunner in World War I. They shot things from airplanes. After the war he moved to Hollywood to be an actor. By the 1930s he got interested in model airplanes that could fly and joined up with Paul Whittier to open a chain of hobby shops. He designed a few planes and eventually grew the business to sell target drones to the US military. The Radioplane, as it would be known, even got joysticks, and they sold tens of thousands during World War II.
War wasn't the only use for UAVs. Others were experimenting, and 1936 brought the first radio-controlled model airplane competition, a movement that continued to grow and evolve into the 1970s. We also got the Academy of Model Aeronautics (or AMA) in 1936, which launched a magazine called Model Aviation and continues to publish, provide insurance, and act as the UAV, RC airplane, and drone community's representative to the FAA. Their membership still runs close to 200,000.
Most of these model planes were managed from the ground using radio remote controls.
The Federal Communications Commission, or FCC, was established in 1934 to manage the airwaves. They stepped in to manage which frequencies could be used for different use cases in the US, including radio-controlled planes. Where there is activity, there are stars. The Big Guff, built by the brothers Walt and Bill Good, was the first truly successful RC airplane in that hobbyist market. Over the next decades solid state electronics got smaller, cheaper, and more practical. As did the way we could transmit bits over those wireless links.
1947 saw the first radar-guided missile, the subsonic Firebird, which over time evolved into a number of programs. Electro-mechanical computers had been used to calculate trajectories for ordnance during World War II, so with knowledge of infrared we got infrared homing, then television cameras mounted in missiles, all combined with the proximity fuse, which came with small pressure, magnetic, acoustic, radio, then optical transmitters. We got much better at blowing things up.
Part of that was studying the German V-2 rocket programs. They used an analog computer to control the direction and altitude of missiles. The US Polaris and Minuteman missile programs added transistors then microchips to missiles to control the guidance systems. Rockets had computers and so they showed up in airplanes to aid humans in guiding those, often replacing Sperryâs original gyroscopic automations. The Apollo Guidance Computer from the 1969 moon landing was an early example of times when humans even put their lives in the hands of computers - with manual override capabilities of course. Then as the price of chips fell in the 1980s we started to see them in model airplanes.
Modern Drones
By now, radio-controlled aircraft had been used for target practice, to deliver payloads and blow things up, and even for spying. Aircraft without humans to weigh them down could run on electric motors rather than combustion engines, so they were quieter. This allowed UAVs to fly undetected, laying the very foundation for the modern depiction of drones used by the military for covert operations. As the costs fell and carrying capacity increased, we saw them used in filmmaking, surveying, weather monitoring, and anywhere else a hobbyist could use their hobby in their career. But the cameras weren't that great yet. Then Fairchild developed the charge-coupled device, or CCD, in 1969. The first digital camera arguably came out of Eastman Kodak in 1975, when Steven Sasson built a prototype using a mixture of batteries, movie camera lenses, Fairchild CCD sensors, and Motorola parts. Sony came out with the Magnetic Video Camera in 1981 and Canon put the RC-701 on the market in 1986. Fuji, Dycam, and even the Apple QuickTake came out in the next few years. Cameras were getting better resolution, and as we turned the page into the 1990s, they got smaller and used CompactFlash to store images and video files.
The first aerial photograph is attributed to Gaspard-Félix Tournachon, better known as Nadar. Later, the militaries of the world used UAVs that were B-17s and Grumman Hellcats from World War II, converted to drones full of sensors to study nuclear radiation clouds when testing weapons. Those evolved into reconnaissance drones like the Aerojet SD-2, with mounted analog cameras, in the 50s and 60s. During that time we saw the Ryan Firebees and DC-130As run thousands of flights snapping photos to aid intelligence gathering.
Every country was in on it: the USSR, Iran, North Korea, Britain. And the DARPA-instigated Amber and then Predator drones might be considered the modern precursors of the drones we play with today. Again, we see the larger military uses come down-market once secrecy and cost meet a cool factor. DARPA spent $40 million on the Amber program. Manufacturers of consumer drones have certainly made far more than that.
Hobbyists started to develop Do It Yourself (DIY) drone kits in the early 2000s. Now that there were websites, we didn't have to wait for magazines to show up; we could take to World Wide Web forums and trade ideas for how to do what the CIA had done when it conducted the first armed drone strike in 2001 - just maybe without the weapon systems, since this was in the backyard.
Lithium-ion batteries were getting cheaper and lighter. As were much faster chips. Robotics had come a long way as well, and moving small parts of model aircraft was much simpler than avoiding all the chairs in a room at Stanford. Hobbyists turned into companies that built and sold drones of all sizes, some of which got in the way of commercial aircraft. So the FAA started issuing drone permits in 2006.
Every technology has a point where the confluence of all the underlying technologies meets a truly commercially viable product. We had Wi-Fi, RF (or radio frequency), iPhones, mobile apps, and tiny digital cameras in our phones and even in spy teddy bears. We understood flight and propellers, and plastics were heavier than air but lighter than metal. So in 2010 we got the Parrot AR Drone. This was the first drone sold to the masses that was just plug and play. And an explosion of drone makers followed, with consumer products ranging from around $20 to hundreds of dollars now. Drone races, drone aerogymnastics, drone footage on our Apple and Google TV screens, and, with TinyML projects for every possible machine learning need we can imagine, UAVs that stabilize cameras, find objects based on information we program into them, and handle any other use we can imagine.
The concept of drones or unmanned aerial vehicles (UAV) has come a long way since the Austrians tried to bomb the Venetians into submission. Today there are mini drones, foldable drones, massive drones that can carry packages, racing drones, and even military drones programmed to kill. In fact, right now there are debates raging in the UN around whether to allow drones to autonomously kill. Because Skynet.
We're also experimenting with passenger drone technology, because autonomous driving is another convergence just waiting in the wings. Imagine going to the top of a building, getting in a small pod, and then flying a few buildings over - or to the next city. Maybe in our lifetimes, but not as soon as some of the companies who went public to do just this thought.
-
Humans have probably considered flight since they first saw birds. As far back as 228 million years ago, pterosaurs used flight to rain down onto other animals from above and eat them. The first known bird-like dinosaur was Archaeopteryx, which lived around 150 million years ago. It's not considered an ancestor of modern birds - but other dinosaurs from the same era, the theropods, are. 25 million years later, in modern China, Confuciusornis sanctus had feathers and could have flown. The first humans wouldn't emerge from Africa until roughly 125 million years later. By the 2300s BCE, the Sumerians depicted shepherds riding eagles, as humanity looked to the skies in our myths and legends. These were creatures, not vehicles.
The first documented vehicle of flight appears as far back as the 7th century BCE, when the Rāmāyana told of the Pushpaka Vimāna, a flying palace made by Vishwakarma for Brahma, complete with chariots that flew the king Rama high into the atmosphere. The Odyssey was written around the same time and tells of the Greek pantheon of gods but doesn't reference flight as we think of it today. Modern interpretations might move floating islands to the sky, but it seems more likely that the floating island of Aeolia is really the islands off Aeolis, or Anatolia, which we might refer to as the modern land of Turkey.
Greek myths from a few hundred years later introduced more figures capable of flight. Icarus flew too close to the sun on wings that had been fashioned by Daedalus. By then, the Greeks could have been aware, through trade routes cut by Alexander and later rulers, of kites from China. The earliest attempts at flight trace their known origins to 500 BCE in China. Kites were, like most physical objects, heavier than air, yet could still be used to lift an object into flight. Some of those early records even mention the ability to lift humans off the ground with a kite. The principle used in kites was used later in the development of gliders and then, when propulsion was added, modern aircraft. Any connection between any of these is conjecture, as we can't know how well the whisper net worked in those ages.
Many legends are based on real events. The history of humanity is vast, and many of our myths are handed down through the generations. The Greeks had far more advanced engineering capabilities than some of the societies that came after, yet they were still wary of what happened if one flew too close to the sun. In fact, emperors of China are reported to have forced some people to leap from cliffs on gliders as a means of punishment; perhaps that was where some fear of flight originated. The Chinese emperor Wang Mang used a scout with bird features to glide on a scouting mission around the same time the Icarus myth might have been documented. Whether this knowledge informed the storytellers Ovid documented in his story of Icarus is lost to history, since he didn't post it to Twitter.
Once the Chinese took the string off the kite, and kites got large enough to fly with a human, they had effectively developed hang gliders. In the third century BCE, Chinese inventors added the concept of rotors for vertical flight when they developed helicopter-style toys. Those were then used to frighten off enemies. Some of those evolved into the beautiful paper lanterns that fly when lit. There were plenty of other evolutions and false starts with flight after that. Abbas ibn Firnas glided with feathers in the 9th century. A Benedictine monk did so again in the 11th century. Both were injured when they jumped from towers, in a Middle Ages that spanned from the Muslim Golden Age to England.
Leonardo da Vinci studied flight for much of his life. His studies produced another human-powered ornithopter and other contraptions; however, he eventually realized that humans would not be able to fly on their own power alone. Others attempted the same old wings made of bird feathers, wings that flapped on the arms, wings tied to legs, different types of feathers, higher places to jump from, and anything else they could think of. Many broke bones, which continued until we found ways to supplement human power to propel us into the air. Then a pair of brothers in the Ottoman Empire had some of the best luck. Hezarfen Ahmed Çelebi crossed the Bosphorus strait on a glider. That was 1633, and by then gunpowder had already helped the Ottomans conquer Constantinople, ending the last vestiges of ancient Roman influence along with the Byzantine Empire as the conquerors renamed the city Istanbul. That was the power of gunpowder. His brother then built a rocket using gunpowder and launched himself high into the air, before gliding back to the ground.
The next major step was the hot air balloon. The modern hot air balloon was built by the Montgolfier brothers in France and first flown in 1783 (Petrescu & Petrescu, 2013). Ten days later, the first gas balloon was invented by Nicolas-Louis Robert and Jacques Alexandre Charles. The gas balloon used hydrogen and, in 1785, was used to cross the English Channel. That trip sparked the era of dirigibles. We built larger balloons to lift engines with propellers, a period that culminated with the Zeppelin. From the 1700s on, much of what da Vinci realized was rediscovered, but this time published, and the body of knowledge built out. The physics of flight were then studied as new sciences emerged. Sir George Cayley started to actually apply physics to flight in the 1790s.
Powered Flight
We see this over and over in history: once we understand the physics and can apply science, progress starts to speed up. That was true when Archimedes defined force multipliers with the simple machines in the 3rd century BCE, true with solid state electronics far later, and true with Cayley's research. Cayley conducted experiments, documented his results, and proved hypotheses. He got to codifying bird flight and why it worked. He studied the Chinese tops that worked like modern helicopters. He documented gliding flight and applied math to why it worked. He defined drag and measured the force of windmill blades. In effect, he got to the point that he knew how much power was required, relative to weight, to actually sustain flight. Then, to achieve that, he explored the physics of fixed-wing aircraft, complete with an engine, tail assembly, and fuel. His work culminated in a treatise called "On Aerial Navigation," published in 1810. By the mid-1850s, there was plenty of research flowing toward the goal of sustained air travel. Ideas like rotors led to rotorcraft. Those were all still gliding. Even with Cayley's research, we had triplane gliders and gliders launched from balloons. After that, the first aircraft that looked like the modern airplanes we think of today were developed. Cayley's contributions were profound. He even described how to mix air with gasoline to build an engine. Influenced by his work, others built propellers. Some of those were steam powered and others powered by tight springs, like clockworks. Aeronautical societies were created, wing configurations and camber were experimented with, and wheels were added to try to lift off. Some even lifted a little off the ground. By the 1890s, the first gasoline-powered biplane gliders were developed and flown, even if those early experiments crashed. Humanity was finally ready for powered flight.
The Smithsonian housed some of the earliest experiments. They hired their third director, Samuel Langley, in 1887. He had been interested in aircraft for decades and as with many others had studied the Cayley work closely. He was a consummate tinkerer and had already worked in solar physics and developed the Allegheny Time System. The United States War department gave him grants to pursue his ideas to build an airplane. By then, there was enough science that humanity knew it was possible to fly and so there was a race to build powered aircraft. We knew the concepts of drag, rudders, thrust from some of the engineering built into ships. Some of that had been successfully used in the motorcar. We also knew how to build steam engines, which is what he used in his craft. He called it the Aerodrome and built a number of models. He was able to make it further than anyone at the time. He abandoned flight in 1903 when someone beat him to the finish line.
That's the year humans stepped beyond gliding and into the first controlled, sustained, and powered flight. There are reports that Gustave Whitehead beat the Wright brothers, but he didn't keep detailed notes or logs, and so the Wrights are usually credited with the achievement. They managed to solve the problem of how to roll, built steerable rudders, and built the first biplane with an internal combustion engine. They flew their first airplane in North Carolina, where Orville Wright went 120 feet and his brother went 852 feet later that day. That plane now lives at the National Air and Space Museum in Washington, DC, and December 17th, 1903 represents the start of the age of flight.
The Wrights spent years testing gliders and documenting their results. They studied in wind tunnels, tinkered with engines, and were methodical, if not scientific, in their approach. They didn't manage a public demonstration until 1908, though, and so there was a lengthy battle over the patents they filed. Turns out it was a race, and a lot of people flew within months of one another. Decades of research culminated in what had to be: airplanes. Innovation happened quickly. Flight improved enough that planes could cross the English Channel by 1909. There were advances after that, but patent wars over the invention dragged on, and investors stayed away from the unproven technology.
Flight for the Masses
The superpowers of the world were at odds for the first half of the 1900s. An Italian pilot flew a reconnaissance mission in Libya in the Italo-Turkish War in 1911. It took only 9 days before they went from reconnaissance to dropping grenades on Turkish troops from the planes. The age of aerial warfare had begun. The Wrights had received an order for the first military plane back in 1908. Military powers took note, and by World War I there was an air arm in every military power. Intelligence wins wars. The innovation was ready for the assembly lines, so during and after the war the first airplane manufacturers were born. Dutch engineer Anthony Fokker was inspired by Wilbur Wright's exhibition in 1908. He went on to start a company and design the Fokker M.5, which evolved into the Fokker E.I after World War I broke out in 1914. They mounted a machine gun and synchronized it to the propeller in 1915. Manfred von Richthofen, also known as the Red Baron, flew Albatros fighters before he moved to Fokker's famous triplane. Fokker made it all the way into the 1990s before going bankrupt. Albatros was founded in 1909 by Enno Huth, who went on to found the German Air Force before the war.
The Bristol Aeroplane Company was born in 1910 after Sir George White, who was involved in transportation already, met Wilbur Wright in France. Previous companies were built to help hobbyists, similar to how many early PC companies came from inventors as well. This can be seen with people like Maurice Mallet, who helped design gas balloons and dirigibles. He licensed airplane designs to Bristol who later brought in Frank Barnwell and other engineers that helped design the Scout. They based the Bristol Fighters that were used in World War I on those designs. Another British manufacturer was Sopwith, started by Thomas Sopwith, who taught himself to fly and then started a company to make planes. They built over 16,000 by the end of the war. After the war they pivoted to make ABC motorcycles and eventually sold to Hawker Aircraft in 1920, which later sold to Raytheon.
The same paradigm played out elsewhere in the world, including the United States. Once those patent disputes were settled, plenty knew flight would help change the world. By 1917 the patent wars in the US had to end, as the country's contributions to flight suffered. No investor wanted to touch the space, so there was a lack of capital to expand. Wilbur Wright passed away in 1912 and Orville sold his rights to the patents, so the Assistant Secretary of the Navy, Franklin D. Roosevelt, stepped in and brought all the parties to the table to develop a cross-licensing organization. After years of patent fights, we could finally get innovation in flight back on track globally. In rapid succession, Loughead Aircraft (later Lockheed) and Douglas Aircraft were founded, and then Jack Northrop left those and started his own aircraft company. Boeing was founded in 1916 as Pacific Aero Products and later became part of United Aircraft, which spun off United Airlines as a carrier in the 1930s with Boeing continuing to make planes.
United was only one of many commercial airlines created. Passenger air travel started soon after the first flights, with the first airline ferrying passengers in 1914. With plenty of airplanes assembled at all these companies, commercial travel was bound to explode into its own big business. Delta started as a crop-dusting service in Macon, Georgia in 1925 and has grown into an empire. The world's largest airline at the time of this writing is American Airlines, which started in 1926 when a number of smaller airlines banded together. Practically every country had at least one airline: Pan American (Pan Am for short) in 1927, Ryan Air in 1926, Slow-Air in 1924, Finnair in 1923, Qantas in 1920, KLM in 1919, and the list goes on. Enough that the US passed the Air Commerce Act in 1926, which over time led to the Bureau of Air Commerce, which evolved into the Federal Aviation Administration, or FAA, we know today.
Aircraft were refined and made more functional. World War I brought with it the age of aerial combat. Plenty of supply after the war, and then the growth of manufacturers competing with one another, brought further innovation, and commercial aircraft and industrial uses (like crop dusting) enabled more investment into R&D.
In 1926, the first flying boat service was inaugurated from New York to Argentina. Another significant development came in the 1930s, when the jet engine was invented: Frank Whittle registered a turbojet engine patent, and Hans von Ohain developed a jet plane called the Heinkel He 178 (Grant, 2017). That plane first flew in 1939, but the Whittle jet engine is the ancestor of those found in planes in World War II and beyond. From there to the monster airliners, stealth fighters, or the X-15 becomes a much larger story. The aerospace industry continued to innovate both in the skies and into space.
The history of flight entered another phase in the Cold War. The RAND Corporation developed the concept of intercontinental ballistic missiles (or ICBMs) and the Soviet Union launched the first satellite into space in 1957. Then in 1969, Neil Armstrong and Buzz Aldrin made the first landing on the Moon, and we continued to launch into space throughout the 1970s to 1990s before opening up space travel to private industry. Those projects got bigger and bigger and bigger. But generations of enthusiasts and engineers were inspired by devices far smaller, and without pilots in the device.
-
Computing has totally changed how people buy and experience travel. That process seemed to start with sites that made it easy to book travel, but as with most things we experience in our modern lives, it actually began far sooner and moved down-market as generations of computing led to more consumer options for desktops, the internet, and the convergence of these technologies. Systems like SABRE did the original work to re-think travel - to take logic and rules out of the heads of booking and travel agents and put them into a digital medium. In so doing, they paved the way for future generations of technology and to this day retain a valuation of over $2 billion.
SABRE is short for Semi-Automated Business Research Environment. It's used to manage over a third of global travel, to the tune of over a quarter trillion US dollars a year. It's used by travel agencies and travel services to reserve car rentals, flights, hotel rooms, and tours. Since Sabre was released, services like Amadeus and Travelport have been created to give the world more Global Distribution Systems, or GDSs.
Passenger air travel began when airlines ferrying passengers cropped up in 1914, but the big companies began in the 1920s - with KLM in 1919, Finnair in 1923, Delta in 1925, American Airlines and Ryan Air in 1926, Pan American in 1927, and the list goes on. They grew quickly, and by 1926 the Air Commerce Act led to a new arm of the government overseeing air commerce, which evolved into the FAA, or Federal Aviation Administration, in the US. And each country, given the possible dangers these aircraft posed as they got bigger and loaded with more and more fuel, had its own such department. The aviation industry blossomed in the roaring 20s as people traveled and found romance and vacation. At the time, most airlines were somewhat regional, and people found travel agents to help them book travel, lodging, and often food along their journey. The travel agent naturally took over air travel much as they'd handled sea travel before. But there were dangers in traveling in those years between the two World Wars: Nazis rising to power in Germany, Mussolini in Italy, communist cleansings in Russia and China. Yet a trip to the Great Pyramid of Giza could now be a week instead of months. Following World War II, there was a fracture in the world between Eastern and Western powers - those who aligned with the former British Empire and those who aligned with the former Russian Empire, now known as the Soviet Union. Travel within the West exploded, as those areas were usually safe and often happy to accept the US dollar. Commercial air travel boomed not just for the wealthy, but for all. People had their own phones now, and could look up a number in a phone book and call a travel agent.
The travel agents then spent hours trying to build the right travel package. That meant time on the phone with hotels and time on the phone with airlines. Airlines like American had to hire larger and larger call centers of humans to help find flights. We didn't just read about Paris, we wanted to go. Wars had connected the world, and now people wanted to visit the places they'd previously just seen in art books or read about in history books. Those call centers kept growing. A company like American Airlines couldn't handle all of its ticketing needs, and the story goes that the CEO was sitting beside a salesman from IBM when they came up with the idea of a computerized reservation system.
And so SABRE was born in the 1950s, when American Airlines agreed to develop a real-time computing platform. Here, we see people calling in and pressing buttons to run commands on computers. The tones weren't that different from a punch card, really. The system worked well enough for American that they decided to sell access to other firms. The computers used were based loosely on the IBM mainframes used in the SAGE air defense system. Here we see the commercial impact of the AN/FSQ-7 the US government hired IBM to build, as IBM added transistorized options to the IBM 704 mainframe in 1955. That gave IBM the interactive computing technology that evolved into the 7000 series mainframes.
Now that IBM had the interactive technology, and a thorough study had been done to evaluate the costs and impacts of a new reservation system, American and IBM signed a contract to build the system in 1957. They went live to test reservation booking shortly thereafter. But it turns out there was a much bigger opportunity here. See, American and other airlines had paper processes to track how many people were on a flight and quickly find open seats for passengers, but it could take an hour or three to book tickets. This was fairly common before software ate the world: standing in line at the bank, booking dinner at a restaurant, reserving a rental car, booking hotel rooms, and the list goes on.
There were a lot of manual processes in the world - people weren't just going to punch holes in a card to program their own flight and wait for some drum storage to tell them if there was an available seat. That was the plan American initially had in 1952 with the Magnetronic Reservisor. That never worked out. American had grown into one of the largest airlines and knew the perils and costs of developing software and hardware like this. Their system cost $40 million in 1950s money to build with IBM. They also knew that as other airlines grew to accommodate more people flying around the world, more flights would make that hour or three even longer. So they should, of course, sell the solution they built to other airlines.
Thus, parlaying the SAGE name, famous as a Cold War shield against nuclear attack, Sabre Corporation began. It was fairly simple at first, with a pair of IBM 7090 mainframes that could take over 80,000 calls a day in 1960. Some travel agents weren't fans of the new system, but those who embraced it found they could get more done in less time. Sabre sold reservation systems to airlines and soon expanded to become the largest data processor in the world - far better than the Reservisor would have been, and now able to help bring the whole world into the age of jet airplane travel.
That exploded to thousands of flights an hour in the 1960s, and eventually all booking was turned over to the computer. The system got busy, and over the years IBM upgraded the computers to the S/360. Sabre also began to lease systems to travel agencies in the 1970s after Max Hopper joined the company and began the plan to open up the platform, as TWA had done with their PARS system. Then they went international and opened service bureaus in other cities (given that we once had to pay toll charges to call a long-distance number). By the 1980s, Sabre was how travel agents booked flights. The 1980s also brought EAASY SABRE, so people could use their own computers to book flights, and by then - and through to the modern era - a little over a third of all reservations are made on Sabre.
By the mid-1980s, United had their own system called Apollo, Delta had one called Datas, and other airlines had their own as well. But SABRE could be made airline-neutral. IBM had been involved with many American competitors, developing Deltamatic for Delta, PANAMAC for Pan Am, and other systems. But SABRE could be hooked to the new online services for a whole new way to connect systems. One of these was CompuServe in 1980, then Prodigy, GEnie, and AOL as we turned the corner into the 1990s. Then they started a site called Travelocity in 1996, which was later sold to Expedia.
In the meantime, they got serious competition, which eventually led to a slew of acquisitions to remain competitive. The competition included Amadeus, as well as Galileo International and Worldspan, both now providers in the Travelport GDS. The first of these originated from United Airlines, and by 1987 was joined by Aer Lingus, Air Portugal, Alitalia, British Airways, KLM, Olympic, Sabena, and Swissair to create Galileo, which was then merged with the Apollo reservation system. The technology was acquired through a company called Videcom International, which started developing reservation software in 1972, shortly after the Apollo and Datas services went online. They focused on travel agents and branched out into reservation systems of all sorts in the 1980s. As other systems arose, they provided aggregation by connecting to Amadeus, Galileo, and Worldspan.
Amadeus was created in 1987 to be a neutral GDS after the issues with Sabre directing reservations to American Airlines. That came through a consortium of Air France, Iberia, Lufthansa, and SAS. They acquired the assets of the bankrupt System One and eventually added other travel options including hotels, car rentals, travel insurance, and other amenities. They went public in 1999 just before Sabre did, and then were also taken private just before Sabre was.
Worldspan was created in 1990 as the result of merging, or interconnecting, the systems of Delta, Northwest Airlines, and TWA; it was then acquired by Travelport in 2007. By then, SABRE had its own programming languages. While the original Sabre systems were written in assembly, they developed their own language called SabreTalk and later transitioned to standard REST APIs. They also weren't a part of American any longer. There were too many problems with manipulating how flights were displayed to benefit American Airlines, and they had to make a clean cut - especially after Congress got involved in the 1980s and outlawed that type of bias in screen placement.
Now that they were a standalone company, Sabre went public, then was taken private by private equity firms in 2007, and relisted on NASDAQ in 2014. Meanwhile, travel aggregators had figured out they could hook into the GDS systems and sell discount airfare without a percentage going to travel agents. Now that the GDS systems weren't a part of the airlines, they were able to put downward pressure on prices. Hotwire, which used Sabre and a couple of other systems, and TripAdvisor, which booked travel through Sabre and Amadeus, were created in 2000. Microsoft had launched Expedia in 1996, and it did well enough to get spun off into its own public company by 2000. Travelocity operated inside Sabre until it was sold, and so the airlines put together a site of their own that they called Orbitz, which in 2001 was the biggest e-commerce site to have ever launched. Out of the bursting of the dot com bubble came online travel bookings. Kayak came in 2004.
Sabre later sold Travelocity to Expedia, which uses Sabre to book travel. That allowed Sabre to focus on providing the back end travel technology. They now do over $4 billion in revenue in their industry. American Express had handled travel for decades but also added flights and hotels to their site, integrating with Sabre and Amadeus as well.
Here, we see a classic paradigm in play. First the airlines moved their travel bookings from paper filing systems to isolated computer systems - what we'd call mainframes today. The airlines then rethought the paradigm and aggregated other information into a single system, or a system intermixed with other data. In short, they enriched the data. Then those systems were exposed as APIs to further remove human labor and put systems on assembly lines. Sites hooked into those, and the GDS systems, as with many aggregators, got spun off into their own companies. The aggregated information then benefits consumers (in this case travelers) with more options and cheaper fares. This helps counteract the centralization of the market as airlines acquire other airlines, but in some ways it also cheapens the experience. Gone are the days when a travel agent guides us through our budgets and helps us build a killer itinerary. But in a way, that just makes travel much more adventurous.
-
We've talked about the history of microchips, transistors, and other chip makers. Today we're going to talk about Intel in a little more detail.
Intel is short for Integrated Electronics. They were founded in 1968 by Robert Noyce and Gordon Moore. Noyce was an Iowa kid who went off to MIT to get a PhD in physics in 1953. He then joined the Shockley Semiconductor Lab to work with William Shockley, who'd co-developed the transistor as a solid-state alternative to vacuum tubes in computers and amplifiers.
Shockley became erratic after he won the Nobel Prize, and 8 of the researchers left, now known as the "traitorous eight." Between them, they went on to found or seed over 60 companies, including Intel - but first they created a new company called Fairchild Semiconductor, where Noyce invented the monolithic integrated circuit in 1959, or a single chip that contains multiple transistors.
After 10 years at Fairchild, Noyce joined up with coworker and fellow traitor Gordon Moore. Moore had gotten his PhD in chemistry from Caltech and, while at Fairchild, had observed that the number of transistors, resistors, diodes, or capacitors in an integrated circuit was doubling every year - and so coined Moore's Law, the prediction that it would continue to do so. They wanted to make semiconductor memory cheaper and more practical.
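To put that observation into rough numbers, here's a small illustrative sketch in Python; the function and the two-year doubling period are my assumptions for the example, not anything Intel published:

```python
# A back-of-the-envelope sketch of Moore's observation: if transistor counts
# double on a fixed cadence, growth is exponential. The two-year doubling
# period and the starting point (the 4004's 2,300 transistors, mentioned
# later in this piece) are illustrative assumptions.
def transistors(start_count, start_year, year, doubling_period_years=2):
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

print(f"{transistors(2_300, 1971, 1989):,.0f}")  # roughly a million by the 486 era
```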
They needed money to continue their research. Arthur Rock, who had helped them find a home at Fairchild when they left Shockley, helped them raise $2.5 million in backing in a couple of days.
The first day of the company, Andy Grove joined them from Fairchild. He'd fled the Hungarian revolution in the 50s and gotten a PhD in chemical engineering at the University of California, Berkeley. Then came Leslie Vadász, another Hungarian emigrant. Funding and money coming in from sales allowed them to hire some of the best in the business - people like Ted Hoff, Federico Faggin, and Stan Mazor.
That first year they released 64-bit static random-access memory in the 3101 chip, doubling what was on the market, as well as the 3301 read-only memory chip and the 1101. Then came DRAM, or dynamic random-access memory, in the 1103 in 1970, which became the bestselling chip within a couple of years.
Armed with a lineup of chips and an explosion of companies that wanted to buy the chips, they went public within 2 years of being founded. 1971 saw Dov Frohman develop erasable programmable read-only memory, or EPROM, while working on a different problem. This meant they could reprogram chips using ultraviolet light and electricity.
In 1971 they also created the Intel 4004 chip, a project that started in 1969 when a calculator manufacturer out of Japan asked them to develop 12 different chips. Instead they made one that could do all of the tasks of the 12, outperforming the ENIAC from 1946, and so the era of the microprocessor was born. And instead of taking up a basement at a university lab, it took up an eighth of an inch by a sixth of an inch to hold a whopping 2,300 transistors. The chip didn't contribute a ton to the bottom line of the company, but they'd built the first true microprocessor, which would eventually be what they were known for.
Instead they were making DRAM chips. But then came the 8008 in 1972, ushering in an 8-bit CPU. Intel's memory chips were being used by other companies developing their own processors, but Intel knew how to build processors too, and the Computer Terminal Corporation was looking to develop what was a trend for a hot minute: programmable terminals. And given the doubling of speeds, those gave way to microcomputers within just a few years.
The Intel 8080 was a 2 MHz chip that became the basis of the Altair 8800, SOL-20, and IMSAI 8080. By then Motorola, Zilog, and MOS Technology were hot on their heels, releasing the Z80 and 6802 processors. But Gary Kildall wrote CP/M, one of the first microcomputer operating systems, initially for the 8080 prior to porting it to other chips.
Sales had been good and Intel had been growing. By 1979 they saw the future was in microprocessors and opened a new office in Haifa, Israel, where they designed the 8088, which clocked in at 4.77 MHz. IBM chose this chip to be used in the original IBM Personal Computer. IBM was going to use an 8-bit chip, but the team at Microsoft talked them into going with the 16-bit 8088, and thus was created the foundation of what would become the Wintel or Intel architecture, or x86, which would dominate the personal computer market for the next 40 years.
One reason IBM trusted Intel is that they had proven to be innovators. They had effectively invented the integrated circuit, then the microprocessor, then coined Moore's Law, and by 1980 had built a 15,000 person company capable of shipping product in large quantities. They were intentional about culture, looking for openness, distributed decision making, and trading off bureaucracy for figuring out cool stuff.
That IBM decision to use that Intel chip is one of the most impactful in the entire history of personal computers. Based on Microsoft DOS and then Windows being able to run on the architecture, nearly every laptop and desktop would run on that original 8088/86 architecture. Based on the standards, Intel and Microsoft would both market that their products ran not only on those IBM PCs but also on any PC using the same architecture, and so IBM's hold on the computing world would slowly wither.
On the back of all these chips, revenue shot past $1 billion for the first time in 1983. IBM bought 12 percent of the company in 1982 and thus gave them the Big Blue seal of approval, something important even today. And the hits kept on coming with the 286 through 486 chips arriving during the 1980s.
Intel brought the 80286 to market and it was used in the IBM PC AT in 1984. This new chip brought new ways to manage addresses - the first Intel chip that could do memory management, and the first where we saw protected mode, so we could get virtual memory and multi-tasking. All of this was made possible with over a hundred thousand transistors. At the time the original Mac used a Motorola 68000, but while its sales were sluggish, sales flourished at IBM, and slowly we saw the rise of companies cloning the IBM architecture, like Compaq - still using those Intel chips.
Jerry Sanders had actually left Fairchild a little before Noyce and Moore to found AMD and ended up cloning the instructions in the 80286, after entering into a technology exchange agreement with Intel. This led to AMD making the chips at volume and selling them on the open market. AMD would go on to fast-follow Intel for decades.
The 80386 would go on to simply be known as the Intel 386, with over 275,000 transistors. It was launched in 1985, but we didn't see a lot of companies use it until the early 1990s. The 486 came in 1989. Now we were up to a million transistors, as well as a math coprocessor. We were 50 times faster than the 4004 that had come out less than 20 years earlier.
I don't want to take anything away from the phenomenal run of research and development at Intel during this time, but the chips and cores and amazing developments were on autopilot. The 80s also saw them invest half a billion dollars in reinvigorating their manufacturing plants. With quality manufacturing allowing for a new era of printing chips, the 90s were just as good to Intel. I like to think of this as the Pentium decade, with the first Pentium in 1993. 32-bit here we come. Revenues jumped 50 percent that year, closing in on $9 billion.
Intel had been running an advertising campaign around Intel Inside. This represented a shift in brand recognition from the IBM PC to the Intel chip inside it. The Pentium Pro came in 1995 and we'd crossed 5 million transistors in each chip. And the brand equity was rising fast. More importantly, so was revenue. 1996 saw revenues pass $20 billion. The personal computer was showing up in homes and on desks across the world and most had Intel Inside - in fact we'd gone from Intel Inside to Pentium Inside.
1997 brought us the Pentium II with over 7 million transistors, the Xeon came in 1998 for servers, and 1999 brought the Pentium III. By 2000 they introduced Intel's first gigahertz processor and announced the next generation after Pentium: Itanium, finally moving the world to the 64-bit processor.
As gains in processor speed slowed, they were able to bring multi-core processors and massive parallelism out of the hallowed halls of research and to the desktop computer in 2005.
2006 saw Intel go from just Windows to the Mac as well. And 45 nanometer logic technology using hafnium-based high-k material for transistor gates represented a shift from the silicon-gate transistors of the 60s and allowed them to move to hundreds of millions of transistors packed into a single chip. i3, i5, i7, and on. The chips now have a couple hundred million transistors per core, and with 8 cores on a chip that potentially puts us over 1.7 or 1.8 billion transistors per chip.
Microsoft, IBM, Apple, and so many others went through huge growth and sales jumps, then retreated while dealing with how to run a company of the size they suddenly became. This led each to invest heavily in R&D to effectively end a lost decade - like when IBM built the S/360 or Apple developed the iMac and then the iPod.
Intel's strategy had been research and development. Build amazing products and they sold. Bigger, faster, better. The focus had been on power. But mobile devices were starting to take the market by storm. And the ARM chip was more popular on those because, with a reduced instruction set, it could use less power and be a bit more versatile.
Intel coined Moore's Law. They know that if they don't find ways to pack more and more transistors into smaller and smaller spaces, then someone else will. And while they haven't been huge in the RISC-based System on a Chip space, they do continue to release new products and look for the right product-market fit - just like they did when they went from DRAM and SRAM to producing the types of chips that made them into a powerhouse. And on the back of a steadily rising revenue stream that's now over $77 billion, they seem poised to be able to weather any storm. Not only on the back of R&D but also some of the best manufacturing in the industry.
Chips today are so powerful and small that they contain the whole computer from the era of those Pentiums, just as that 4004 chip contained a whole ENIAC. This gives us a nearly limitless canvas to design software. Machine learning on a SoC expands the reach of what that software can process. Technology is moving so fast in part because of the amazing work done at places like Intel, AMD, and ARM. Maybe that positronic brain that Asimov promised us isn't as far off as it seems. But then, I thought that in the 90s as well, so I guess we'll see.
-
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries.
Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the class of 1956 from MIT, where he got his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979.
Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles in the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries.
Gartner has developed a number of tools to make it easier to take in the types of analysis they create. One is the Magic Quadrant, reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool.
Another of Gartner's graphical design patterns to display technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases.
* The first is the Technology Trigger, which is when a breakthrough is found and PoCs, or proofs-of-concept, begin to emerge in the world, getting the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise.
* The second is the Peak of Inflated Expectations, when the press picks up the story, companies are born, capital is invested, and a large number of projects around the new technology fail.
* The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment.
* The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to have real productivity gains. Every company or IT department now runs a pilot and expectations are lower, but now achievable.
* The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders. The mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success.
There are issues with the hype cycle. Not all technologies will follow the cycle. The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a one-time series of events when it should in fact be cyclical: out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge.
ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to 1942, with Alan Turing and with Isaac Asimov's "Runaround", where the three laws of robotics initially emerged. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm he called "temporal-difference learning" to play checkers. Academics around the world worked on similar projects, and by 1956 John McCarthy introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for a summer workshop at Dartmouth. They tinkered and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's ELIZA debuted. ELIZA was a computer program that used early forms of natural language processing to run what they called a "DOCTOR" script that acted as a psychotherapist.
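To give a sense of how little machinery a program like ELIZA actually needed, here's a toy sketch of the pattern-match-and-reflect idea in Python. The rules below are invented for illustration; they are not Weizenbaum's actual DOCTOR script:

```python
# A toy ELIZA-style responder: match a pattern, "reflect" pronouns, and fill a
# canned template. No understanding involved - which was Weizenbaum's point.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),          # fallback when nothing else matches
]

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*[reflect(g) for g in match.groups()])

print(respond("I am feeling anxious"))   # -> How long have you been feeling anxious?
```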
ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment. Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were then able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore.
Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another similar movement, connectionism, based mostly on node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel focused on ideas about human vision that would later inspire convolutional neural networks, culminating in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the original deep learning paper from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" in 1962, and work done behind the iron curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism - which, when paired with machine learning, would be called deep learning after Rina Dechter coined the term in 1986 - went through a similar trough of disillusionment that kicked off in 1970.
Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s.
These hype cycles weren't just seen in the United States. The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made based on the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding was drastically cut for AI research around the UK. Turing, von Neumann, McCarthy, and others had, either intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, the New York Times claimed Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" in the 1950s - a goal not likely to be achieved in the near future even seventy years later.
Funding was cut in the US, the UK, and even in the USSR, or Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped to make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, researchers began to look for ways to buy commercially built computers ideal for use as Lisp machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle had begun in 1983 when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin.
Another trend that began in the 1950s but picked up steam in the 1980s was expert systems. These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after running into barriers with CPUs, by the 1980s the processors got fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility Among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like the "Fifth Generation Computer Systems" initiative in 1982, a 10 year project to build up massively parallel computing systems. IBM spent around the same amount on their own projects. However, while these types of projects helped to improve computing, they didn't live up to the expectations, and by the early 1990s funding was cut following commercial failures.
By the mid-2000s, some of the researchers in AI began to use new terms, after generations of artificial intelligence projects led to subsequent AI winters. Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets.
By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops with hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic if you will. RNNs, CNNs, DNNs, GANs.
Building training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI.
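To make that zero-sum framing concrete, here's a minimal sketch of the GAN idea in PyTorch: a generator and a discriminator trained against each other on a stand-in 1-D Gaussian. The tiny networks and hyperparameters are arbitrary illustrations, not Goodfellow's original setup:

```python
# Minimal GAN sketch: the discriminator learns to tell real samples from
# generated ones, while the generator learns to fool it - a zero-sum game.
import torch
import torch.nn as nn

torch.manual_seed(0)
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0            # stand-in "real" data: N(3, 0.5)
    fake = gen(torch.randn(64, 8))                   # generator maps noise to samples

    # Discriminator: label real as 1, generated as 0
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to get its samples labeled as 1
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(gen(torch.randn(1000, 8)).mean().item())       # drifts toward the real mean, 3.0
```

The generator never sees the real data directly - it only learns from how well it fools the discriminator - which is why GANs made it possible to produce plausible new samples without hand-labeled training targets.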
This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included:
* Reid Hoffman, former PayPal COO, LinkedIn cofounder, and venture capitalist.
* Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley.
* Jessica Livingston, founding partner at Y Combinator.
* Greg Brockman, an AI researcher who had worked on projects at MIT and Harvard.
OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that's more human than previous models. Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text so people don't have to hand-label weights, thus automating the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and were ready to build products to take to market by 2019. That's when they switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly. Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of a world-changing technological breakthrough than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023.
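For a hands-on sense of what generative pre-training buys you, here's a minimal example of autoregressive text generation using the openly available GPT-2 model through the Hugging Face transformers library - an illustration of the general technique, not OpenAI's ChatGPT or its API, and the prompt and settings are arbitrary:

```python
# Generate text one token at a time with a small pre-trained GPT-style model
# (GPT-2), conditioned on everything generated so far.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The history of computing is",   # arbitrary prompt for illustration
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```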
ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The AI winter following each peak seems to be based on the reach of the audience and the depth of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains.
The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. Those are the smart ones.
-
Research into the history of computers sometimes leads down some interesting alleys - or wormholes even. My family would always go out to eat Chinese food, or pick it up, on New Year's day. None of the Chinese restaurants in the area actually closed, so it just made sense. The Christmas leftovers were gone by then and no one really wanted to cook. My dad mentioned there were no Chinese restaurants in our area in the 1970s - so it was a relatively new entrant to the cuisine of my North Georgia town.
Whether it's the Tech Model Railroad Club or hobbyists from Cambridge, stories abound of young engineers debating the merits of this programming technique or chipset or that. So much so that while reading Steven Levy's Hackers or Tom Lean's Electronic Dreams, I couldn't help but hop on Door Dash and order up some yummy fried rice. Then I started to wonder, why this obsession?
For one, many of these hackers didn't have a ton of money. Chinese food was quick and cheap. The restaurants were often family-owned and small. There were higher end restaurants, but concepts like P.F. Chang's hadn't sprung up yet. That wouldn't come until 1993. Another reason it was cheap is that many of the proprietors of the restaurants were recent immigrants. Some were from Hunan, others from Taipei or Sichuan, Shanghai, or Peking (the Romanized name for Beijing). Chinese immigrants began to flow into the United States during the Gold Rush of California in the late 1840s and early 1850s.
The Qing Empire had been at its height at the end of the 1700s, when China ruled over a third of the humans in the world. Not only that - it was one of the top economies in the world. But rapid growth in population meant less farmland for everyone and fewer jobs to go around. Poverty spread, just as colonial powers began to pick away at parts of the empire. Britain had banned the slave trade in 1807 and Chinese laborers had been used to replace the slaves. The use of opium spread throughout the colonies and, with the laborers, back into China. The Chinese tried to ban the opium trade and seized opium in Canton. The British had better ships and better guns, and when the First Opium War broke out, China was forced to give up Hong Kong to the British in 1842, which began what some historians refer to as a century of humiliation as China gave up land until it was able to modernize. Hong Kong became a British colony under Queen Victoria and the Victorian obsession with China grew. Art, silks (as with the Romans), vases, and anything else the British could get their hands on flowed through Hong Kong.
Then came the Taiping Rebellion, which lasted from 1851 to 1864. A self-proclaimed Christian theocrat led an uprising, and China was forced to wage an internal war, with around 20 million people dying and scores more displaced. The scent of an empire in decay was in the air. Set against a backdrop of more rebellions, the Chinese army was weakened to the point that it lost the First Sino-Japanese War in 1894, which invited more intervention from colonial powers. By 1900, the anti-colonial and anti-Christian Boxer Uprising saw missionaries slaughtered and foreigners expelled. The great powers of the day sent ships and troops to retrieve their peoples and soon declared war on the empire and seized Beijing. This was all expensive, and led to reparations, a prohibition on importing arms, the razing of forts, and more foreign powers occupying areas of China. The United States put over $10 million of its take from what it called the Boxer Indemnity toward supporting Chinese students who came to the United States.
The Qing court had lost control, and by 1911 the Wuchang Uprising began; by 1912, 2,000 years of Chinese dynasties were over, with the Republic of China founded that year and internal conflicts for power continuing until Mao Zedong and his followers finally seized power, established the People's Republic of China as a communist nation, and cleansed the country of detractors during what they called the Great Leap Forward, resulting in 45 million dead. China itself was diplomatically disconnected from the United States at the time, which had backed the government now in exile in Taipei, the capital of Taiwan - still called the Republic of China, as it had been during the Civil War.
The food, though. Chinese food began to come into the United States during the Gold Rush. Cantonese merchants flowed into the sparkling bay of San Francisco, and emigrants could find jobs mining, laying railroad tracks, and in agriculture. Hard work means you get real hungry, and they cooked food like they had at home. China had a better restaurant and open market cooking industry than the US at the time (and arguably still does). Some of the Chinese who settled in San Francisco started restaurants - many better than those run by Americans. The first known restaurant owned by a Chinese proprietor was Canton Restaurant in 1849. As San Francisco grew, so grew the Chinese food industry.
Every group of immigrants faces xenophobia or racism. The use of Chinese laborers had led to laws in England that attempted to limit their use. In some cases they were subjugated into labor. Chinese immigrants came for the California Gold Rush and many stayed. More restaurants were opened, and some catered to white people more than to the Chinese. The Transcontinental Railroad was completed in 1869 and tourists began to visit San Francisco from the east. Chinatowns began to spring up in other major cities across the United States: restaurants, laundries, and even eastern pharmacies. New people bring new ways, and economies go up and down. Prejudice reared its ugly head. There was an economic recession in the 1870s. There were fears that the Chinese were taking jobs, driving wages down, and causing crime. Anti-Chinese sentiment became law in the Chinese Exclusion Act of 1882, which halted immigration into the US. That would not be repealed until 1943.
Conservative approaches to immigration did nothing to limit the growing appeal of Chinese food in the United States. Merchants, like those who owned Chinese restaurants, could get special visas. They could bring relatives and workers. Early Chinese restaurants had been called "chow chow houses", and by the early 1900s there were new chop suey restaurants in big cities that were affordable. Chop suey basically means "odds and ends", and most of the dishes were heavily westernized but still interesting and delicious. The food was fried in ways it hadn't been in China, and sweeter. Ideas from other Asian nations also began to come in, like fortune cookies, initially from Japan.
Americans began to return home from World War II in the late 1940s. Many had experienced new culinary traditions in lands they visited. Chinese food in America had initially been Cantonese-inspired, but more people flowed in from other parts of China and from Taiwan, and they brought food inspired by their native lands. Areas like New York and San Francisco got higher end restaurants. Once the Chinese Exclusion Act was repealed, plenty of immigrants fled wars and cleansing in China. Meanwhile, Americans embraced access to different types of foods - like Italian, Chinese, and fast food. Food became a part of the national identity. Further, new ways to preserve food became possible as people got freezers and canneries helped spread foods - like pasta sauce.
This was the era of the spread of Spam and other types of early processed foods. The military helped spread the practice - as did Jeno Paulucci and his Chun King Corporation, started in 1947. The Great Depression had proven there needed to be new ways to distribute foods, and some capitalized on that. 4,000+ Chinese restaurants in the US in the 1940s meant there were plenty of companies to buy those goods rather than make them fresh. Chop suey was possibly created by the early Chinese migrants. A new influx of immigrants would have new opportunities to diversify the American palate.
The 1960s saw an increase in legislation to protect human rights. Amidst the civil rights movement, the Hart-Celler Act of 1965 stopped the long-standing practice of controlling immigration effectively by color. The post-war years saw shifting borders and wars throughout the world - especially in Eastern Europe and Asia. Marshall Plan-style aid helped rebuild the parts of Asia that weren't communist and opened the ability for more diverse people to move to the US. Many of the people we've covered went into computing and helped develop a number of aspects of the field. They didn't just come from China - they came from Russia, Poland, India, Japan, Korea, Vietnam, Thailand, and beyond. Their food came with them. This is the world the Hackers that Steven Levy described lived in. The first Chinese restaurant in London opened in 1907, and more followed as people from Hong Kong moved to the UK, especially after World War II.
The number of Chinese restaurants in the US grew to tens of thousands in the decades since Richard Nixon visited Beijing in 1972 to open relations back up with China. But the impact at the time was substantial, even on technologists. It wasn't just those hackers from MIT who loved their Chinese food, but those in Cambridge as well in the 1980s, who partook in a more Americanized Chinese cuisine, like chow mein - which loosely translates as "fried noodles" and emerged in the US in the early 1900s.
Not all dishes have such simple origins to track down. Egg rolls emerged in the 1930s, a twist on the more traditional Chinese spring roll. Ding Baozhen, a governor of the Sichuan province in the Qing Dynasty, discovered a spicy marinated chicken dish in the mid-1800s that spread quickly. His title was Palace Guardian, or Kung Pao, as the dish is still known. Zuo Zongtang, better known as General Tso, was a Qing Dynasty statesman and military commander who helped put down the Taiping Rebellion in the latter half of the 1800s. Chef Peng Chang-kuei escaped communist China to Taiwan, where he developed General Tso's chicken and named it after the war hero. It came to New York in the 1970s. Sweet and sour pork also got its start in the Qing era, in 18th century Cantonese cuisine, and spread to the US with the Gold Rush.
Some dishes are far older. Steamed dumplings were popular from Afghanistan to Japan and go back to the Han Dynasty - possibly invented by the Chinese doctor Zhang Zhongjing in the centuries before or after the turn of the millennium. Peking duck is far older than its American popularity suggests, getting its start in the 1300s under the Yuan and then Ming Dynasties - but closer to Shanghai. Otto Reichardt brought the ducks to San Francisco to be served in restaurants in 1901. Chinese diplomats helped popularize the dish in the 1940s as some of their staffs stayed in the US, and the dish exploded in popularity in the 1970s - especially after Nixon's trip to China, which included a televised meal on Tiananmen Square where he and Henry Kissinger ate the dish.
There are countless stories of Chinese-born immigrants bringing their food to the world. Some are emblematic of larger population shifts globally. Cecilia Chiang grew up in Shanghai until Japan invaded, when she and her sister fled to Chengdu, only to flee the Chinese Communists and emigrate to the US in 1959. She opened The Mandarin in 1960 in San Francisco and a second location in 1967. It was an upscale restaurant and introduced a number of new dishes to the US from China. She went on to serve everyone from John Lennon to Julia Child - and her son Philip replaced her in 1989 before starting a more mainstream chain of restaurants he called P.F. Chang's in 1993. The American dream, as it had come to be known. Plenty of other immigrants from countries around the world were met with open arms. Chemists, biologists, inventors, spies, mathematicians, doctors, physicists, and yes, computer scientists. And of course, chefs. Diversity of thought, diversity of ideas, and diversity-driven innovation can only come from diverse peoples. The hackers innovated over their Americanized versions of Chinese food - many making use of technology developed by immigrants from China, their children, or those who came from other nations. Just as those from nearly every industry did.
-
The Silk Road, or roads more appropriately, has been in use for thousands of years. Horses, jade, gold, and of course silk flowed across the trade routes, as did spices - and knowledge. The term Silk Road was coined by a German geographer named Ferdinand von Richthofen in the 1870s to describe a network of routes that was somewhat formalized in the second century, and that some theorize dates back 3,000 years, given that silk has been found on Egyptian mummies from that time - or further. The use of silk itself in China in fact dates back perhaps 8,500 years.
Chinese silk has been found in Scythian graves, ancient Germanic graves, and along mountain ranges and waterways around modern India; gold and silk flowed between east and west. These routes gave way to empires along the Carpathian Mountains and the Kansu Corridor. There were Assyrian outposts in modern Iran, and the Sogdians built cities around modern Samarkand in Uzbekistan, an area that has been inhabited since the 4th millennium BCE. The Sogdians developed trading networks that spanned over 1,500 miles - into ancient China. The road expanded with the Persian Royal Road from the 5th century BCE across Turkey, and with the conquests of Alexander the Great in the 300s BCE, the Macedonian Empire pushed into Central Asia and modern Uzbekistan. The satrap Diodotus I claimed independence for one of those areas between the Hindu Kush, Pamirs, and Tengri Tagh mountains, which became known by the Hellenized name Bactria and was called the Greco-Bactrian and then Indo-Greek Kingdoms by history. Their culture also dates back thousands of years further.
The Bactrians became powerful enough to push into the Indus Valley, west along the Caspian Sea, and north to the Syr Darya river - known as the Jaxartes at the time - and on to the Aral Sea. They also pushed south into modern Pakistan and Afghanistan, and east to modern Kyrgyzstan. To cross the Silk Road was to cross through Bactria, and they were considered a Greek empire in the east. The Han Chinese called them Daxia in the third century BCE. They grew so wealthy from the trade that they became the target of conquest by neighboring peoples once the thirst for silk in the Roman Empire could not be quenched. The Romans consumed so much silk that silver reserves were worn thin and they regulated how silk could be used - something some Muslim rulers would do as well over the next generations.
Meanwhile, the Chinese hadn't known where their silk was destined, but had been astute enough to limit who knew how silk was produced. The Chinese general Pan Chao attempted to make contact with the Romans in the first century AD, only to be thwarted by the Parthians, who acted as the middlemen on many a trade route. It wasn't until the Romans pushed east enough to control the Persian Gulf that an envoy sent by Marcus Aurelius made direct contact with China in 166 AD. Justinian even sent monks to smuggle home silkworm eggs and mulberry seeds so the west could begin to produce silk of its own. The west had perpetrated industrial espionage on the east, a practice that would be repeated in 1712 when a Jesuit priest found out how the Chinese created porcelain.
The Silk Road was a place where great fortunes could be found or lost. The Dread Pirate Roberts was a character from the movie The Princess Bride, who had left home to make his fortune so he could spend his life with his love, Buttercup. The Silk Road had made many a fortune, so Ross Ulbricht used that name on a site he created called the Silk Road, along with the handles Frosty and Altoid. He'd gotten his Bachelor's at the University of Texas and his Master's at Penn State University before he got the idea to start a website he called the Silk Road in 2011. Most people connected to the site via Tor and paid for items in bitcoin. After he graduated from Penn State, he'd started a couple of companies that didn't do that well. Given the success of Amazon, he and a friend started a site to sell used books, but Ulbricht realized it was more profitable to be the middle man, as the Parthians had been thousands of years earlier. The new site would be Underground Brokers, later changed to The Silk Road. Cryptocurrencies allowed for anonymous transactions. He got some help from others, including two who went by the pseudonyms Smedley (later suspected to be Mike Wattier) and Variety Jones (later suspected to be Thomas Clark).
They started to facilitate transactions in 2011. Business was good almost from the beginning. Then Gawker published an article about the site, and more and more attention was paid to what was sold through this new darknet portal. The United States Department of Justice and other law enforcement agencies got involved. When bitcoins traded at less than $80 each, the United States Drug Enforcement Administration (DEA) seized 11 bitcoins, but couldn't take the site down for good. It was actually an IRS investigator named Gary Alford who broke the case when he found the link between the Dread Pirate Roberts and Altoid, and then a post that included Ulbricht's name and email address. Ulbricht was picked up in San Francisco and 26,000 bitcoins were seized, along with another 144,000 from Ulbricht's personal wallets. Two federal agents were arrested when it was found they traded information about the investigation to Ulbricht. Ulbricht was also accused of murder for hire, but those charges never led to much. Ulbricht now serves a life sentence.
The Silk Road of the darknet didn't sell silk. 70% of the 10,000 things sold were drugs. There were also fake identities, child pornography, and, through a second site, firearms. There were scammers. Tens of millions of dollars flowed over this new Silk Road. But the secrets weren't guarded well enough, and a Silk Road 2 was created in 2013, which only lasted a year. Others come and go. It's kinda like playing whack-a-mole. The world is a big place and the reach of law enforcement agencies is limited, thus the harsh sentence for Ulbricht.
-
Dassler shoes was started by Adolf Dassler in 1924 in Germany, after he came home from World War I. His brother Rudolph joined him. They made athletic shoes and developed spikes to go on the bottom of the shoes. By 1936, they convinced Jesse Owens to wear their shoes on the way to his gold medals. Some of the American troops who liked the shoes during World War II helped spread the word.
The brothers had a falling out soon after the war was over. Adolf founded Adidas while Rudolph created a rival shoe company called Puma. This was just in time for the advertising industry to convince people that if they bought athletic shoes they would instantly be, er, athletic. The two companies became a part of an ad-driven identity that persists to this day - one that most who buy the advertised products hardly understand themselves.
A national identity involves concentric circles of understanding. The larger a nation, the more concentric circles and the harder it is to nail down exactly who has what identity. Part of this is that people spend less time thinking about who they are and more time being told who they should want to be like. Woven into the message of who a person should be is a bunch of products that a person has to buy to become the ideal. That's called advertising.
James White founded the first modern advertising agency, called "R. F. White & Son", in Warwick Square, London in 1800. The industry evolved over the next hundred or so years as more plentiful supplies led to competition and so more of a need to advertise goods. Increasingly popular newspapers from better printing presses turned out to be a great place to advertise. The growth of industrialism meant there were plenty of goods and so competition between those who manufactured or trafficked those goods. The more efficient the machines of industry became, the more the advertising industry helped sell what the world might not yet know it needed. Many of those agencies settled onto Madison Avenue in New York as balances of global power shifted, and so by the end of World War II, Madison Avenue had become a synonym for advertising. Many now-iconic brands were born in this era.
Manufacturers and distributors weren't the only ones to use advertising. People put out ads to find love in the personals, and by the 1950s advertising even began to find its way into politics. Iconic politicians could be created.
Dwight D. Eisenhower served as the United States president from 1953 to 1961. He oversaw the liberation of Northern Africa in World War II, before he took command to plan the invasion of Normandy on D-Day. He was almost universally held as a war hero in the United States. He had not held public office, but the ad men of Madison Avenue were able to craft messages that put him into the White House. Messages like "I like Ike."
These were the early days of television and the early days of computers. A UNIVAC was able to predict that Eisenhower would defeat Adlai Stevenson in a landslide election in 1952. The country was not "Madly for Adlai", as his slogan went.
ENIAC had first been used in 1945. MIT's Whirlwind was created in 1951, and the age of interactive computing was upon us. Not only could a computer predict who might win an election, but new options in data processing allowed for more granular ways to analyze data. A young senator named John F. Kennedy was heralded as a "new candidate for the 1960s." Just a few years earlier, Stevenson had lambasted Ike for using advertising, but this new generation was willing to let computers help build a platform - just as the advertisers were starting to use computers to help them figure out the best way to market a product. It turns out that words mattered.
At the beginning of that 1960 election, many observed they couldn't tell much difference between the two candidates: Richard Nixon and John Kennedy. Kennedy's Democrats were still largely fractured between those who believed in philosophies dating back to the New Deal and segregationists. Ike presided over the early days of the post-World War II new world order. This new generation, like new generations before and since, was different. They seemed to embrace the new digital era. Someone like JFK wasn't punching cards and feeding them into a computer, writing algorithms, or out surveying people to collect that data. That was done by a company founded in 1959 called Simulmatics. Jill Lepore called them the What If men in her book If Then - a fascinating read that goes further into the politics of the day. The founder of the company was a Madison Avenue ad man named Ed Greenfield. He surrounded himself with a cast of characters that included people from Johns Hopkins University, MIT, Yale, and IBM.
Ithiel de Sola Pool had studied Nazi and Soviet propaganda during World War II. He picked up on work from the Hungarian Frigyes Karinthy and, with students, ran Monte Carlo simulations on people's acquaintances to formulate what would later become the Small World Problem, or the Six Degrees of Separation - a later inspiration for the social network of the same name and, even later, for Facebook. The social sciences had become digital. Political science could then be used to get at the very issues that could separate Kennedy from Nixon.
The People Machine, as some called it, was a computer simulation, thus the name of the company. It would analyze voting behaviors. The previous Democratic candidate, Stevenson, had given long-winded, complex speeches. They analyzed the electorate and found that "I Like Ike" resonated with more people. It had, after all, been developed by the same ad man who came up with "Melts in your mouth, not in your hands" for M&Ms. They called the project Project Microscope. They recruited some of the best liberal minds in political science and computer science. They split the electorate into 480 groups. A big focus was how to win the African-American vote. Turns out Gallup polls didn't study that vote because Southern newspapers had blocked doing so.
Civil rights, and race relations in general, weren't unlike a few other issues. There was anti-Catholic sentiment, anti-Jewish sentiment, and anti-a-lot. The Republicans were the party of Lincoln and had gotten a lot of votes over the previous hundred years for that. But factions within the party had shifted. Loyalties were shifting. Kennedy was a Catholic, but many had cautioned he should downplay that issue. The computer predicted that civil rights and anti-Catholic bigotry would help him, and that became part of Kennedy's platform. He stood for what was right, but were they his positions or just what the nerds thought? He gained votes at the last minute. Turns out the other disenfranchised groups saw the bigotry against one group as akin to bigotry against their own - just as the computers thought they would. Kennedy became an anti-segregationist, as that would help win the Black vote in some large population centers. It was the most aggressive, or liberal, civil-rights plank the Democrats had ever taken up.
Civil rights are human rights. Catholic rights are as well. Kennedy offered the role of Vice President to Lyndon B. Johnson, the Senate Majority Leader, and was nominated as the Democratic candidate. Project Microscope from Simulmatics was hired in part to shore up Jewish and African-American votes. They said Kennedy should turn the fact that he was a Catholic into a strength: use it, give up a few votes here and there in the South, but pick up other votes. He also took the Simulmatics information as it came out of the IBM 704 mainframe to shore up his stance on other issues. That confidence helped him out-perform Nixon in the televised debates. They used teletypes and even had the kids' rooms converted into temporary data rooms. CBS predicted Nixon would win. Less than an hour later they predicted Kennedy would win. Kennedy won the popular vote by 0.1 percent of the country, even after two recounts. The Black vote had turned out big for Kennedy.
News leaked about the work Simulmatics had done for Kennedy. Some knew that IBM had helped Hitler track Jews, as has been written about in the book IBM and the Holocaust by Edwin Black. Others still had issues with advertising in campaigns and couldn't fathom computers. Despite Stalin's disgust for computers, some compared the use of computers to Stalinist propaganda. Yet it worked - even if in retrospect the findings were all things we could take for granted. They weren't yet. The Kennedy campaign at first denied the use of an "electronic brain," and yet their reports live on in the Kennedy Library. A movement against the use of the computer seemed to die after Kennedy was assassinated.
Books of fiction persisted, like The 480 from Eugene Burdick, which got its title from the number of groups Simulmatics used. The company went on to experiment with every potential market their computer simulation could be used in. The most obvious was the advertising industry. But many of those companies went on to buy their own computers. They already had what many now know is the most important aspect of any data analytics project: the data. Sometimes they had decades of buying data - and could start over on more modern computers. Simulmatics worked with the Times to analyze election results in 1962, to try to catch newspapers up with television. The project was a failure and newspapers leaned into more commentary and longer-term analysis to remain a relevant supplier of news in a world of real-time television. They also applied their brand of statistics to help simulate the economy of Venezuela in a project called Project Camelot, which LBJ later shot down.
Their most profitable venture became working with the defense department to do research in Vietnam. They collected data, analyzed data, punched data into cards, and fed it into computers. Pool was unabashedly pro-US and it's arguable that they saw what they wanted to see. So did the war planners in the Pentagon, who followed Robert McNamara. McNamara had been one of the Whiz Kids who turned around the Ford Motor Company with a new brand of data-driven management to analyze trends in the car industry, shore up supply chains, and out-innovate the competition. He became the first president of the company who wasn't a Ford. His family had moved to the US from Ireland to flee the Great Irish Famine. Not many generations later he got an MBA from Harvard before he became a captain in the United States Army Air Forces during World War II, serving primarily as an analyst. Henry Ford II hired his whole group to help with the company.
As many in politics and the military learn, companies and nations are very different. They did well at first, reducing the emphasis on big nuclear first-strike capabilities and developing other military capabilities. One of those was how to deal with guerrilla warfare and counterinsurgencies. That became critical in Vietnam, a war between the communist North Vietnamese and the South Vietnamese. The North was backed by North Korea, China, and the Soviet Union; the South by the United States, South Korea, and Australia. Others got involved, but those were the main parties. McNamara used computers to provide just-in-time provisioning of armed forces and to move spending to where it could be most impactful, which slashed over $10 billion in military spending. As the Vietnam war intensified, the statistical ratio of troops killed by Americans to American casualties made it look, computationally, like the war was being won. In hindsight we know it was not.
Under McNamara, ARPA hired Simulmatics to study the situation on the ground. They would merge computers, information warfare, psychological warfare, and social sciences. The Vietnamese they interviewed didn't always tell them the truth - after all, maybe the interviewers were CIA agents. Many of the studies lacked true scholars, as the war was unpopular back home. People who collected data weren't always skilled at the job. They spoke primarily with those they could reach without getting shot at along the way. In general, the algorithms might or might not have worked - but they had bad data. Yet Simulmatics sent reports to McNamara saying the operations were going well. Many in the military would remember this as real capabilities at cyber warfare and information warfare were developed in the following decades.
Back home, Simulmatics also became increasingly tied up in things Kennedy might arguably have fought against. There were riots and civil rights protests, and Simulmatics took contracts to simulate racial riots. Some felt their options were to riot or to go die in the jungles of Vietnam. The era of predictive policing had begun as the hope of the early 1960s turned into the apathy of the late 1960s. Martin Luther King Jr. spoke out against riot prediction, yet Simulmatics pushed on. Whether their insights were effective in many of those situations was, just like in Vietnam, dubious. They helped usher in the era of surveillance capitalism, in a way. But the arrival of computers in ad agencies meant that if they hadn't, someone else would have.
People didn't take kindly to being poked, prodded, and analyzed intellectually. Automation took jobs, which Kennedy had addressed in rhetoric if not in action. The war was deeply unpopular as American soldiers came home from a far-off land in caskets. The link between Simulmatics and academia was known. Students protested against them and claimed they were war criminals. The psychological warfare abroad, being on the wrong side of history at home with the race riots, and the disintegrating military-industrial-university complex didn't help.
There were technical issues. The technology had moved on from languages like FORTRAN. Further, the number of data points required, and how they were processed, called for what we now refer to as "Big Data" and "machine learning." Those technologies showed promise early, but more mathematics needed to be developed to fully weaponize the surveillance of everything. More code and libraries needed to be developed to crunch the large amounts of statistics. More work needed to be done to get better data and process it. The computerization of the social sciences was just beginning, and while people like Pool predicted the societal impacts we could expect, people at ARPA doubted the results. The company could not be saved, and all these factors converged to put Simulmatics into bankruptcy in 1970.
Their ideas and research lived on. Pool and others published some of their findings. Books opened minds to the good and bad of what technology could do. The Southern politicians, or Dixiecrats, fell apart. Nixon embraced a new brand of conservatism after he lost the race to be Governor of California to Pat Brown in 1962. There were charges of voter fraud from the 1960 election. The Mansfield Amendment restricted military funding of basic research in 1969 and went into effect in 1970. Ike had warned of the growing links between universities and the creation of weapons of war - links that Simulmatics signified - and the amendment helped pull back funding for such exploits. As Lepore points out in her book, mid-century liberalism was dead.
Nixon tapped into the silent majority who countered the counterculture of the 1960s. Crime rose and the conservatives became the party of law and order. He opened up relations with China, spun down the Vietnam war, negotiated with the Soviet leader Brezhnev to warm relations, and rolled back Johnson's attempts at what had been called the Great Society to get inflation back in check. Under him the incarceration rate in the United States exploded. His presidency ended with Watergate, and under Ford, Carter, Reagan, and Bush, the personal computer became prolific and the internet, once an ARPA project, began to take shape. They all used computers to find and weigh issues, thaw the Cold War, and build a new digitally-driven world order. The Clinton years saw an acceleration of the Internet, and by the early 2000s companies like PayPal were on the rise. One of their founders was Peter Thiel.
Peter Thiel founded Palantir in 2003, then invested in companies like Facebook with his PayPal money. Palantir received backing from In-Q-Tel, whose tagline is "World-class, cutting-edge technologies for National Security." In-Q-Tel was founded in 1999 as the global technological evolution began to explode. While the governments of the world had helped build the internet, it wasn't long before they realized it gave an asymmetrical advantage to newcomers. The more widely available the internet, the farther attacks could reach and the more subversive economic warfare could be. Governmental agencies like the United States Central Intelligence Agency (CIA) needed more data and the long-promised artificial intelligence technologies to comb through that data. Agencies got together and launched their own venture capital fund, similar to those in the private sector - In-Q-Tel. Palantir has worked to develop software for US Immigration and Customs Enforcement, or ICE, to investigate criminal activities, and allegedly used data obtained from Cambridge Analytica along with Facebook data. The initial aim of the company was to take technology developed for PayPal's fraud detection and apply it to other areas like terrorism, with help from intelligence agencies. They help fight fraud for nations and have worked with the CIA, NSA, FBI, CDC, and various branches of the United States military on various software projects. Their Gotham project is the culmination of decades of predictive policing work.
There are dozens of other companies like Palantir. Just as with Pool's work on Six Degrees of Separation, social networks made the amount of data that could be harvested all the greater. Companies use that data to sell products. Nations use that data for propaganda. Those who get elected to run nations use that data to find out what they need to say to be allowed to do so. The data is more accurate with every passing year. Few of the ideas are all that new, just better executed. The original sin mostly forgotten, we still have to struggle with the impact and ethical ramifications. Politics has always involved a bit of a ruse in the rise to power. Now it's less about personal observation and more about the observations and analyses that can be gleaned from large troves of data. The issues brought up in books like The 480 are as poignant today as they were in the 1960s.
-
Taiwan is a country about half the size of Maine with about 17 times the population of that state. Taiwan sits just over a hundred miles off the coast of mainland China. It's home to some 23 and a half million humans - roughly halfway between the populations of Texas and Florida, or a few more than live in Romania, for the Europeans. Taiwan was connected to mainland China by a land bridge in the Late Pleistocene, and human remains have been found dating back 20,000 to 30,000 years. About half a million people on the island nation are aboriginal, or their ancestors are from there. But the population became more and more Chinese in recent centuries.
Taiwan had not been part of China during the earlier dynastic ages but had been used by dynasties in exile to attack one another and so became a part of the Chinese empire in the 1600s. Taiwan was won by Japan in the late 1800s and held by the Japanese until World War II. During that time, a civil war had raged on the mainland of China with the Republic of China eventually formed as the replacement government for the Qing dynasty following a bloody period of turf battles by warlords and then civil war.
Taiwan was under martial law from the time the pre-communist government of China retreated there during the exit of the Nationalists from mainland China in the 1940s until the late 1980s. During that time, just like the exiled Han dynasty, they orchestrated war from afar. They stopped fighting, much like the Koreans, but have still never signed a peace treaty. And so large parts of the world remained in stalemate.
As the years became decades, Taiwan, or the Republic of China as they still call themselves, has always had an unsteady relationship with the People's Republic of China, or China as most in the US call it. The Western world recognized the Republic of China, while the Soviet-aligned and Chinese-aligned countries recognized the mainland government. US President Richard Nixon visited mainland China in 1972 to re-open relations with the communist government there, and relations slowly improved.
The early 1970s was a time when much of the world still recognized the ruling government of Taiwan as the official Chinese government and there were proxy wars the two continued to fight. The Taiwanese and Chinese still aren't besties. There are deep scars and propaganda that keep relations from being repaired.
Before World War II, the Japanese also invaded Hong Kong. During the occupation there, Morris Chang's family became displaced and moved between a few cities during his teens before he moved to Boston to go to Harvard and then MIT, where he fell short of completing a PhD. He then went to work for Sylvania Semiconductor and then Texas Instruments, finally getting his PhD from Stanford in 1964. He became a Vice President at TI and helped build an early semiconductor designer and foundry relationship when IBM designed a chip and TI manufactured it.
The Premier of Taiwan at the time, Sun Yun-suan, played a central role in Taiwan's transformation from an agrarian economy to a large exporter. His biggest win was recruiting Chang to move to Taiwan and found TSMC, the Taiwan Semiconductor Manufacturing Company. Some of this might sound familiar, as it mirrors stories from companies like Samsung in South Korea. In short: Japanese imperialism, democracies versus communists, then rapid economic development as a massive manufacturing powerhouse - in large part because semiconductor designers were split from semiconductor foundries, where chips are actually created.
In this case, a former Chinese national was recruited to return as founder, and he led TSMC for 31 years before he retired in 2018. Chang could see from his time with TI that more and more companies would design chips for their needs and outsource manufacturing. TSMC worked with Texas Instruments, Intel, AMD, NXP, Marvell, MediaTek, and ARM, and then came the big success when they started to make the Apple chips. The company started down that path in 2011 with trial runs of the A5 and A6 SoCs for the iPhone and iPad, but picked up steam with the A8 and A9 through the A14 and the Intel replacement for the Mac, the M1. They now sit on a half-trillion US dollar market cap and are the largest company in Taiwan. For perspective, their market cap only trails the GDP of the whole country by a few billion dollars.
Nvidia
TSMC is also a foundry Nvidia uses. As of the time of this writing, Nvidia is the 8th largest semiconductor company in the world. We've already covered Broadcom, Qualcomm, Micron, Samsung, and Intel. Nvidia is a fabless semiconductor company, and so designs chips that vendors like TSMC manufacture. Nvidia was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem in 1993 in Santa Clara, California (although it is now incorporated in Delaware). Not all who leave the country they were born in due to war, or during times of war, return. Huang was born in Taiwan and his family moved to the US right around the time Nixon re-established relations with mainland China. Huang earned a graduate degree from Stanford and worked as a CPU designer at AMD and a director at LSI Logic, so he had experience as a do-er, a manager, and a manager's manager.
He was joined by Chris Malachowsky and Curtis Priem, who had designed the IBM Professional Graphics Adapter and then the GX graphics chip at Sun. They saw the graphical interfaces of the Mac, Windows, and Amiga OS, they saw the games one could play on those machines, and they thought graphics cards would be the next wave of computing. And so for a long time, Nvidia managed to avoid competition with other chip makers by focusing on graphics. That initially meant gaming and higher-end video production, but has since expanded into much more, like parallel programming and even cryptocurrency mining.
They were more concerned with the next version of the idea or chip or company, and used NV in the naming convention for their files. When it came time to name the company, they looked up words that started with those letters - which of course don't exist in English - so instead they chose invidia, or Nvidia for short, as it's Latin for envy: what everyone who saw the sweet graphics those cards rendered would feel.
They raised $20 million in funding and got to work - first with SGS-Thomson Microelectronics in 1994 to manufacture what they were calling a graphical-user-interface accelerator packaged on a single chip. They worked with Diamond Multimedia Systems to install the chips onto boards. In 1995 they released the NV1. The PCI card was sold as the Diamond Edge 3D and came with a 2D/3D graphics core with quadratic texture mapping. It was screaming fast, and Sega's Virtua Fighter was ported to the platform.
DirectX had come in 1995, so Nvidia released DirectX drivers that supported Direct3D, the API Microsoft developed to render 3D graphics. This was a time when 3D was on the rise for consoles and desktops. Nvidia timed it perfectly and reaped the rewards when they hit a million sold in the first four months of the RIVA 128, a 128-bit 3D processor picked up by OEMs in 1997. Then came the 1998 RIVA ZX and the RIVA TNT for multi-texture 3D processing. They also needed more manufacturing support at this point and entered into a strategic partnership with TSMC to manufacture their boards.
A lot of vendors had a good amount of success in their niches. By the late 1990s there were companies who made memory - the survivors of the DRAM industry after ongoing price-dumping issues. There were companies that made central processors, like Intel. Nvidia led the charge for a new type of chip, the GPU. They invented the GPU in 1999 when they released the GeForce 256, the first single-chip GPU. That meant integrated lighting, triangle setup, and rendering - like the old math coprocessor, but for video. Millions of polygons could be drawn on screens every second. They also released the Quadro Pro GPU for professional graphics and went public in 1999 at an IPO price of $12 per share.
Nvidia used some of the funds from the IPO to scale operations, organically and inorganically. In 2000 they released the GeForce2 Go for laptops and acquired 3dfx, closing deals to get their 3D chips into devices from OEM manufacturers who made PCs and into the new Microsoft Xbox. By 2001 they hit $1 billion in revenues and released the GeForce 3 with a programmable GPU, using APIs to make their GPU a platform. They also released the nForce integrated graphics and so by 2002 hit 100 million processors out on the market.
They acquired MediaQ in 2003 and partnered with game developer Blizzard on Warcraft. They continued their success in the console market when the GeForce platform was used in the PlayStation 3, announced in 2005, and by 2006 they had sold half a billion processors. They also added the CUDA architecture that year to put a general-purpose GPU on the market, and acquired Hybrid Graphics, which developed 2D and 3D embedded software for mobile devices.
In 2008 they went beyond consoles and PCs when Tesla used their GPUs in cars. They also acquired PortalPlayer, which supplied semiconductors and software for personal media players, and launched the Tegra mobile processor to get into the exploding mobile market. There were more acquisitions in 2008, but a huge win came when the GeForce 9400M was put into Apple MacBooks. Then came more, smaller chips in 2009, when Tegra processors were used in Android devices.
They also continued to expand how GPUs were used. They showed up in ultrasound machines and, in 2010, in Audi vehicles. By then their chips powered the Tianhe-1A supercomputer, and they had released Optimus for laptops. All these types of devices that could use a GPU meant they hit a billion processors sold in 2011, which is when they went dual-core with the Tegra 2 mobile processor and entered into cross-licensing deals with Intel.
At this point TSMC was able to pack more and more transistors into smaller and smaller spaces, and this was a big year for larger jobs on the platform. By 2012, Nvidia had Kepler-based GPUs out, and their chips were used in the Titan supercomputer. They also released GRID, a virtualized GPU for cloud processing.
It wasn't all about large-scale computing efforts. The Tegra 3 and the GTX 600 series came out in 2012 as well. Then 2013 brought the Tegra 4, a quad-core mobile processor; a 4G LTE mobile processor; the Nvidia Shield for portable gaming; the GTX Titan; and a GRID appliance. 2014 brought the 192-core Tegra K1, the Shield Tablet, and Maxwell. In 2015 came the Tegra X1, with 256 cores and aimed at deep learning, plus the Titan X, the Jetson TX1 for smart machines, and Nvidia Drive for autonomous vehicles. They continued that deep learning work with an appliance in 2016, the DGX-1, and Drive got an update in the form of the PX 2 for in-vehicle AI. By then, they were a 20-year-old company working on the 11th generation of the GPU, and most CPU architectures had dedicated cores for machine learning options of various types.
2017 brought the Volta architecture, the Jetson TX2, and SHIELD support for the Google Assistant. 2018 brought the Turing GPU architecture, the DGX-2, AGX Xavier, and Clara. 2019 brought AGX Orin for robots and autonomous or semi-autonomous piloting of various types of vehicles, along with the Jetson Nano and Xavier and EGX for edge computing. At this point there were plenty of people who used the GPUs to mine hashes for various blockchains, like with cryptocurrencies, and ARM had finally given Intel a run for its money, with designs from the ARM alliance showing up in everything but Windows devices (so Apple and Android). So Nvidia tried to buy ARM from SoftBank in 2020. That deal eventually fell through, but it would have been an $8 billion windfall for SoftBank, since they paid $32 billion for ARM in 2016.
We probably don't need more consolidation in the CPU sector. Standardization, yes. Some of Nvidia's top competitors include Samsung, AMD, Intel, and Qualcomm, and even companies like Apple make their own CPUs (but not their own discrete GPUs as of the time of this writing). In their niche, they can still make well over $15 billion a year.
The invention of the MOSFET came from immigrants Mohamed Atalla, originally from Egypt, and Dawon Kahng, originally from Seoul, South Korea. Kahng was born in Korea in 1931 but immigrated to the US in 1955 to get his PhD at THE Ohio State University and then went to work for Bell Labs, where he and Atalla invented the MOSFET, and where Kahng retired. The MOSFET was an important step on the way to a microchip.
That microchip market, with companies like Fairchild Semiconductor, Intel, IBM, Control Data, and Digital Equipment, saw a lot of chip designers who maybe had their chips knocked off, either legally in a clean room or illegally outside of one. Some of those ended in legal action, some didn't. But the fact that factories overseas could reproduce chips was a huge part of the movement that came next, which was that companies started to think about whether they could just design chips and let someone else make them. That was in an era of increasing labor outsourcing, when factories could build cars offshore, and so the foundry movement was born - companies that just make chips for those who design them.
As we have covered in this section and many others, many of the people who worked on these kinds of projects moved to the United States from foreign lands in search of a better life. That might have been to flee Europe or the Asian theaters of Cold War jackassery, or it might have been a civil war like in Korea or Taiwan. They had contacts and were able to work with places to outsource to, and this happened at the same time that Hong Kong, Singapore, South Korea, and Taiwan became safe and largely free of violence. And so the Four Asian Tigers economies exploded, fueled by exports and a rapid period of industrialization that began in the 1960s and continues through to today with companies like TSMC, a pure-play foundry, or Samsung, a mixed foundry - aided by companies like Nvidia who continue to effectively outsource their manufacturing operations to companies in the area. At least, while it's safe to do so.
We certainly hope the entire world becomes safe. But it currently is not. There are currently nearly a million Rohingya refugees fleeing war in Myanmar. Over 3.5 million have fled the violence in Ukraine. 6.7 million have fled Syria. 2.7 million have left Afghanistan. Over 3 million are displaced between Sudan and South Sudan. Over 900,000 have fled Somalia. Before Ukrainian refugees fled, mostly to Eastern European countries, the world's refugees had mainly settled in Turkey, Jordan, Lebanon, Pakistan, Uganda, Germany, Iran, and Ethiopia. Very few, comparatively, settled in the largest countries in the world: China, India, or the United States.
It took decades for the children of those who moved abroad, or sent their children abroad, to find that better life. But we hope that history teaches us to get there faster, for the benefit of all.
-
Mark Pincus was at the forefront of mobile technology when it was just being born. He is a recovering venture capitalist who co-founded his first company with Sunil Paul in 1995. FreeLoader was at the forefront of giving people the news through push technology, just as the IETF was in the process of standardizing HTTP. He sold that for $38 million, only to watch it get destroyed. But he did invest in a startup that one of his interns founded - he gave Sean Parker $100,000 to help found Napster.
Pincus then started Support.com, which went public in 2000. Then Tribe.net, which Cisco acquired. As a former user, it was fun while it lasted. Along the way, Pincus teamed up with Reid Hoffman, former PayPal executive and founder of LinkedIn and bought the Six Degrees patent that basically covered all social networking. Along the way, he invested in Friendster, Buddy Media, Brightmail, JD.com, Facebook, Snapchat, and Twitter.
Investing in all those social media properties gave him a pretty good insight into what trends were on the way. Web 2.0 was on the rise and social networks were spreading fast. As they spread, each attempted to become a platform by opening APIs for third-party developers. This left an opening to create a new company that could build software that sat on top of these social media companies. Meanwhile, the gaming industry was in a transition from desktop and console games to hyper-casual games played on mobile devices. So Pincus recruited co-conspirators to start yet another company, and with Michael Luxton, Andrew Trader, Eric Schiermeyer, Steve Schoettler, and Justin Waldron, Zynga was born in 2007. Actually, Zinga was the dog; the company named for her, Zynga, was born in 2007.
Facebook was only three years old at the time, but was already at 14 million users to start 2007. That's when they opened up APIs for integration with third-party products through FBML, or Facebook Markup Language. They would have 100 million users within a year. Given his track record selling companies and picking winners, Zynga easily raised $29 million to start what amounts to a social game studio: they make games that people access through social networks. Luxton, Schiermeyer, and Waldron created the first game, Zynga Poker, in 2007. It was a simple enough Texas hold 'em poker game but rose to include tens of millions of players at its height, raking in millions in revenue.
They'd proven the thesis. Social networks, especially Facebook, were growing. The iPhone came out in 2007. That only hardened their resolve. They sold poker chips in 2008. Then came FarmVille.
FarmVille was launched in 2009 and was an instant hit. The game went viral and had a million daily users within a week. It was originally written in Flash and later ported to iPhones and other mobile platforms. It has now been installed over 700 million times and ran until 2020, when Flash support was dropped by Facebook. FarmVille was free-to-play and simple. It had elements of a 4X game like Civilization, but was co-op, meaning players didn't exterminate one another but instead earned points and thus rankings. In fact, players could help speed up tasks for one another.
Players began with a farm - an empty plot of land. They earned experience points by doing routine tasks. Things like growing crops, upgrading items, plowing more and more land. Players took their crops to the market and sold them for coins. Coins could also be bought. If a player didn't harvest their crops when they were mature, the crops would die. Thus, they had players coming back again and again. Push notifications helped remind people about the state of their farm. Or the news, in FreeLoader-speak.
Some players became what we called dolphins, or players that spent about what they would on a usual game. Maybe $10 to $30. Others spent thousands, which we referred to as whales. They became the top game on Facebook and the top earner. They launched sequels as well, with FarmVille 2 and FarmVille 3.
They bought Challenge Games in 2010, which had been founded by Andrew Busey to develop casual games as well. They bought 14 more companies. They grew to 750 employees. They opened offices in Bangalore, India and in Ireland. They experimented with other platforms, like Microsoft's MSN gaming environment and Google TV. They released CastleVille. And they went public towards the end of 2011. It was a whirlwind ride, and it was just really getting started. They released cute FarmVille toys.
They also released Project Z, Mafia Wars, Hanging with Friends, Adventure World, and Hidden Chronicles. And along the way they became a considerable advertising customer for Facebook, with ads showing up for Mafia Wars and Project Z constantly. Not only that, but their ads flooded other mobile ad networks, as The Sims Social and other games caught on and stole eyeballs. And players were rewarded for spamming the walls of other players, which helped to increase the viral nature of the early Facebook games.
Pincus and the team built a successful, vibrant company. They brought in Jeff Karp and launched Pioneer Trail. Then came another smash hit, Words with Friends. They bought Newtoy - the studio of Paul and David Bettner, who had written a game called Chess with Friends a few years earlier - for $53.3 million to get it. But revenues dropped as the Facebook ride they'd been on began to transition from people gaming in a web browser to mobile devices.
All this growth meant the company was ready for the next phase. In 2013, Zynga hired Donald Mattrick to be the CEO and Pincus moved to the role of Chief Product Officer. They brought in Alex Garden, the General Manager for Xbox Music, Video, and Reading, who had founded the Homeworld creator Relic Entertainment back in the 1990s. The new management didn't fix the decline. The old games continued to lose market share, and Pincus came back to run the company as CEO and cut the staff by 18 percent. In 2015 they brought Frank Gibeau onto the board and by 2016 moved him to CEO of the company.
One challenge with the move to mobile was who got to process the payments. Microtransactions had gone through Facebook for years; they moved to Stripe in 2020. They acquired Gram Games to get Merge Dragons! They bought Small Giant Games to get Empires & Puzzles. They bought Peak Games to get Toon Blast and Toy Blast. They picked up Rollic to get a boatload of action and puzzle games. They bought Golf Rival by acquiring StarLark. And as of the time of this writing they have nearly 200 million players actively logging into their games.
There are a few things to take from the story of Zynga. One is that a free game doesn't put $2.8 billion in revenues on the board - which is what they made in 2021 - by being free. Advertising accounts for just north of half a billion of that; the rest comes from in-app purchases. The next is that the transition away from owner-operators is hard. Pincus and the founding team had a great vision. They executed and were rewarded by taking the company to a gangbuster IPO. The market changed and it took a couple of pivots to adapt. That led to a couple of management shakeups and a transition to more of a portfolio mindset with the fleet of games they own. Another lesson is that larger development organizations don't necessarily get more done. That's why Zynga has had to acquire companies to get hits since around the time they bought Words with Friends.
Finally, when a company goes public the team gets distracted. Not only is going through an IPO expensive and the ensuing financial reporting a hassle to deal with, but it's all distracting. Employees look at stock prices during the day. Higher-ranking employees have to hire a team of accountants to shuffle their money around in order to take advantage of tax loopholes. Growth leads to political infighting and power grabbing. There are also regulatory requirements around how we manage our code and technology that slow down innovation. But it all makes us better run and a safer partner eventually.
All companies go through this. Those who navigate towards a steady state fastest have the best chance of surviving one more lesson: when the first movers prove a monetization thesis, the ocean gets red fast. Zynga became the top mobile development company again after weathering the storm and making a few solid acquisitions. But as Bill Gates pointed out in the 1980s, gaming is a fickle business. So Zynga agreed to be acquired for $12.7 billion in 2022 by Take-Two Interactive, who now owns Civilization, Grand Theft Auto, Borderlands, WWE, Red Dead, Max Payne, NBA 2K, PGA 2K, BioShock, Duke Nukem, Rainbow Six: Rogue Spear, Battleship, Centipede, and the list goes on and on. They've been running a portfolio for a long time. Pincus took away nearly $200 million in the deal and about $350 million in Take-Two equity. Ads and loot boxes can be big business.
Meanwhile, Pincus and Hoffman from LinkedIn work well together, apparently. They built Reinvent Capital, an investment firm that shows that venture capital has quite a high recidivism rate. They had a number of successful investments and SPACs.
Zynga was much more. They exploited Facebook to shoot up to hundreds of millions in revenue. That was revenue Facebook then decided they should have a piece of in 2011, which cut those Zynga revenues in half over time. This is an important lesson any time a huge percentage of revenue is dependent on another party who can change the game (no pun intended) at any time. Diversify.
-
In the beginning was the command line. Actually, before that were punch cards and paper tape. But as Multics and RSTS and DTSS came out, programmers and users needed a way to interface with the computer through the teletypes and other terminals that appeared in the early age of interactive computing. The shell was often just a program that sat on the filesystem, eventually running as a daemon, listening for input from keyboards. This was one of the first things the team that built Unix needed once they had a kernel that could compile. And from the very beginning it was independent of the operating system.
Due to the shell's independence from the underlying operating system, numerous shells have been developed during Unix's history, although only a few have attained widespread use. A shell, also referred to as a command-line interpreter (or CLI), processes commands a user sends from a teletype, and later a terminal. This provided a simpler interface for common tasks than interfacing with the underlying C programming. Over the years, a number of shells have come and gone. Some of the most basic and original commands came from Multics, but the shell as we know it today was introduced as the Thompson shell in the first versions of Unix.
Ken Thompson introduced the first Unix shell in 1971 with the Thompson shell, the ancestor of the shell we still find in /bin/sh. The shell ran in the background and allowed for a concise syntax for redirecting the input and output of commands. For example, pass the output of a command to a file with > or read input from a file with <.
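To make that concrete, here is a minimal sketch, in Python on a POSIX system, of roughly what a shell does when it sees "cmd > file" or "cmd < file": fork a child, rewire the child's file descriptors with dup2, then exec the command. The helper name and the file paths are just for illustration, not anything from the original Thompson shell.

import os

def run_with_redirect(argv, stdin_path=None, stdout_path=None):
    # Roughly what a shell does for "cmd > out" or "cmd < in".
    pid = os.fork()
    if pid == 0:  # child process
        if stdin_path:
            fd = os.open(stdin_path, os.O_RDONLY)
            os.dup2(fd, 0)   # file descriptor 0 is stdin
        if stdout_path:
            fd = os.open(stdout_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            os.dup2(fd, 1)   # file descriptor 1 is stdout
        os.execvp(argv[0], argv)   # replace the child with the command
    os.waitpid(pid, 0)             # the parent waits, like a shell does

# The equivalent of "ls -l > listing.txt" followed by "sort < listing.txt"
run_with_redirect(["ls", "-l"], stdout_path="listing.txt")
run_with_redirect(["sort"], stdin_path="listing.txt")

The command itself never knows or cares where its input and output go, which is what made the concise > and < syntax so powerful.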
-
Lee Felsenstein went to the University of California, Berkeley in the 1960s. He worked at the tape manufacturer Ampex - the company Oracle's founders later came out of - before going back to Berkeley to finish his degree. He was one of the original members of the Homebrew Computer Club, and, like so many inspired by the Altair and its S-100 bus, he designed the Sol-20 in 1976, arguably the first microcomputer that came with a built-in keyboard and could be hooked up to a television. The Apple II was introduced the following year.
Adam Osborne was another of the Homebrew Computer Club regulars. He wrote An Introduction to Microcomputers and sold his publishing company to McGraw-Hill in 1979. Flush with cash, he enlisted Felsenstein to help create another computer, which became the Osborne 1 - the first commercial portable computer, although given that it weighed almost 25 pounds, it is more appropriate to call it a luggable computer. Before Felsenstein built computers, though, he worked with a few others on a community computing project they called Community Memory.
Judith Milhon was an activist in the 1960s Civil Rights movement who helped organize marches and rallies and went to jail for civil disobedience. She moved to Ohio, where she met Efrem Lipkin, and, as with many in what we might now think of as the counterculture, they moved to San Francisco in 1968. St. Jude, as she came to be called, had learned to program in 1967 and ended up at the Berkeley Computer Company after the work on the Berkeley timesharing projects was commercialized. She then met Pam Hardt at Project One.
Project One was a technological community built around an alternative high school founded by Ralph Scott. They brought together a number of non-profits to train people in various skills and as one might expect in the San Francisco area counterculture they had a mix of artists, craftspeople, filmmakers, and people with deep roots in technology. So much so that it became a bit of a technological commune. They had a warehouse and did day care, engineering, film processing, documentaries, and many participated in anti-Vietnam war protests.
They had all this space, and Hardt called around to find a computer. She got an SDS-940 mainframe donated by TransAmerica in 1971; Xerox had gotten out of the computing business and TransAmerica's needs were better suited to other computers at the time. They had this idea to create a bulletin board system for the community and created a project at Project One they called Resource One. Plenty thought computers were evil at the time, given their rapid advancement during the Cold War era, and yet many also thought there was incredible promise to democratize everything.
Peter Deutsch then donated time and an operating system he'd written a few years before. Hardt published a request for help in the People's Computer Company magazine and got a lot of people who just made their own things - an early precursor, maybe, to micro-services, where various people tinkered with data and programs. They were able to do so because of the people who could turn that SDS into a timesharing system.
St. Jude's partner Lipkin took on the software part of the project. Chris Macie wrote a program that digitized information on social services offered in the area, which was maintained by Mary Janowitz, Sherry Reson, and Mya Shone. That database was eventually taken over by the United Way and maintained into the 1990s.
Felsenstein helped with the hardware. They used teletype terminals, and then a video terminal and keyboard built into a wooden cabinet, so real humans could access the system. The project then evolved into what was referred to as Community Memory.
Community Memory
Community Memory became the first public computerized bulletin board system when it was established in 1973 in Berkeley, California. The first Community Memory terminal was located at Leopold's Records in Berkeley. This was the first opportunity for people who were not studying a scientific subject to be able to use computers. It allowed the team to extend the timesharing system into the community, and it became a free online community-based resource used to share knowledge, organize, and grow. The initial stage of Community Memory, from 1973 to 1975, was an experiment to see how people would react to using computers to share information. It became very popular, but the founders soon shut it down because they faced hurdles replicating the equipment and languages being used and were unable to expand the project.
Operating from 1973 to 1992, it went from minicomputers to microcomputers as those became more prevalent. Before Resource One and Community Memory, computers weren't necessarily used for people. They were used for business, scientific research, and military purposes. After Community Memory, Felsenstein and others in the area and around the world helped make computers personal. Community Memory was one aspect of that process, but there were others that unfolded in the UK, France, Germany, and even the Soviet Union - although those were typically impacted by embargoes and a lack of the central government's buy-in for computing in general.
After the initial work was done, many of the core instigators went in their own directions. For example, Felsenstein went on to create the Sol and pursue his other projects in personal computing. Many had families or moved out of the area after the Vietnam War ended in 1975. The economy still wasn't great, but the technical skills made them more employable.
Some of the developers and a new era of contributors regrouped and created a new non-profit in 1977. They started from scratch and developed their own software, database, and communication packages. The printing terminal was very noisy, so they encased it in a cardboard box with a transparent plastic top so people could see what was being printed out. This program ran from 1984 to 1989. After more research, a new terminal was released in 1989 in Berkeley. By then it had evolved into a pre-web social network.
The modified keyboard had brief instructions mounted on it, which showed the steps to send a message, how to attach keywords to messages, and how to search those keywords to find messages from others.
Ultimately, the design underwent three generations, ending in a network of text-based browsers running on basic IBM PCs accessing a Unix server. It was never connected to the Internet, and it closed in 1992. By then, it was large, underpowered, and uneconomical to run in an era when servers and graphical interfaces were available. A booming economy also, ironically, meant a shortage of funding. The job market for programmers exploded in the decade that led up to the dot-com bubble, and with inconsistent marketing and outreach, Community Memory shut down in 1992.
Many of the people involved with Resource One and Community Memory went on to have careers in computing. St. Jude helped found the cypherpunks and created Mondo 2000, a magazine dedicated to that space where computers meet culture. She also worked with Efrem Lipkin on CoDesign, and he was a CTO for many of the dot-coms in the late 1990s. Chris Neustrup became a programmer for Agilent. The whole operation had been funded by various grants and donations, and while there haven't been any studies on the economic impact - it's hard to attribute inspiration rather than direct influence - the payoff was nonetheless considerable.
-
Lars Magnus Ericsson was working for a Swedish government agency that made telegraph equipment in the 1870s when he started a little telegraph repair shop in 1876. That was the same year the telephone was invented. After fixing other people's telegraphs, and then telephones, he started a company making his own telephone equipment, and by the 1890s he was shipping gear to the UK. As the roaring 20s came, they sold stock to buy other companies and expanded quickly.
Early mobile devices used radios to connect mobile phones to wired phone networks, and following projects like ALOHAnet in the 1970s they expanded to digitize communications, allowing for sending early forms of text messages - the way people might have sent those telegraphs when old Lars was still alive and kicking. At the time, the Swedish state-owned Televerket Radio was dabbling in this space and partnered with Ericsson to deliver first those messages and then, as email became a thing, email, to people wirelessly using the 400 to 450 MHz range in Europe and 900 MHz in the US. That standard followed the OSI model and became a 1G wireless packet-switching network we call Mobitex.
Mike Lazaridis was born in Istanbul and moved to Canada in 1966 when he was five, going on to attend the University of Waterloo in 1979. He dropped out of school to take a contract with General Motors to build a networked computer display in 1984. He took out a loan from his parents, got a grant from the Canadian government, and recruited another electrical engineering student, Doug Fregin from the University of Windsor, who designed the first circuit boards, to join him in starting a company they called Research In Motion. Mike Barnstijn joined them and they were off to do research.
After a few years doing research projects, they managed to build up a dozen employees and a million in revenues. They became the first Mobitex provider in America and by 1991 shipped the first Mobitex device. They brought in James Balsillie as co-CEO, to handle corporate finance and business development in 1992, a partnership between co-CEOs that would prove fruitful for 20 years.
Some of those work-for-hire projects they'd done involved reading bar codes, so they started with point-of-sale, enabling mobile payments, and by 1993 shipped RIMGate, a gateway for Mobitex. Then came a Mobitex point-of-sale terminal and finally, with the establishment of the PCMCIA standard, a PCMCIA Mobitex modem they called Freedom.
Two-way paging had already become a thing and they were ready to venture out of PoS systems. So in 1995, they took a $5 million investment to develop the RIM 900 OEM radio modem. They also developed a pager they called the Inter@ctive Pager 900 that was capable of two-way messaging the next year. Then they went public on the Toronto Stock Exchange in 1997.
The next year, they sold a licensing deal to IBM for the 900 for $10 million. That IBM mark of approval is always a sign that a company is ready to play in an enterprise market. And enterprises increasingly wanted to keep executives just a quick two-way page away. But everyone knew there was a technology convergence on the way. They worked with Ericsson to further the technology and over the next few years competed with SkyTel in the interactive pager market.
Enter The BlackBerry
They knew there was something new coming - just as the founders now know something is coming in quantum computing and run a fund for that. They hired a marketing firm called Lexicon Branding to come up with a name, and after the firm saw the keys on the now-iconic keyboard, they suggested BlackBerry. They'd done the research and development and they thought they had a product that was special. So they released the first BlackBerry, the 850, in Munich in 1999. But those were still using radio networks, and more specifically the DataTAC network. The age of mobility was imminent, although we didn't call it that yet. Handspring and Palm each went public in 2000.
In 2000, Research In Motion brought out its first cellular phone product in the BlackBerry 957, with push email and internet capability. But then came the dot-com bubble. Some thought the Internet might have been a fad and might in fact disappear. But instead the world was actually ready for that mobile convergence. Part of that was having developed a great operating system for the time, the BlackBerry OS, released the year before. And in 2000 the BlackBerry was named Product of the Year by InfoWorld.
The new devices took the market by storm and shattered the previous personal information manager market, with shares of Palm dropping by over 90% and Palm OS being set up as its own corporation within a couple of years. People were increasingly glued to their email. While the BlackBerry could do web browsing and faxing over the internet, it was really the integrated email access, phone, and text messaging platform - the kind of convergence companies like General Magic had been working toward as far back as the early 1990s - that made it stick.
The Rise of the BlackBerry
The BlackBerry was finally the breakthrough mobile product everyone had been expecting and waiting for. Enterprise-level security, integration with business email like Microsoft's Exchange Server, a QWERTY keyboard that most had grown accustomed to, the option to use a stylus, and a simple menu made the product an instant smash success. And by instant we mean after five years of research and development and a massive financial investment. Palm owned the PDA market, but the Palm VII cost $599 and the BlackBerry cost $399 at the time (which was far less than the $675 the Inter@ctive Pager had cost in the 1990s). The BlackBerry also let us know when we had new messages using the emerging concept of push notifications. 2000 had seen the second version of the BlackBerry OS, and their AOL Mobile Communicator had helped spread the message that the wealthy could have access to their data any time. By 2001 other carriers were signing on to support devices and BlackBerry was selling bigger and bigger contracts: 5,000 devices, 50,000 devices, 100,000 devices. And a company called Kasten Chase stepped in to develop a secure wireless interface to the Defense Messaging System in the US, which opened up another potential two million people in the defense industry.
They expanded the service to cover more and more geographies in 2001 and revenues doubled, jumping to 164,000 subscribers by the end of the year. That's when they added wireless downloads, so users could access all those MIME attachments in email and display them. Finally, reading PDFs on a phone, with the help of GoAmerica Communications! And somehow they won a patent for the idea that a single email address could be used on both a mobile device and a desktop. I guess the patent office didn't understand why IMAP was invented by Mark Crispin at Stanford in the 80s, or why Exchange allowed multiple devices access to the same mailbox.
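For anyone who hasn't poked at it, the many-devices-one-mailbox idea is baked right into IMAP: the mailbox and its flags live on the server, and any number of clients can connect to the same account. Here is a minimal sketch in Python using the standard library's imaplib; the server name and credentials are placeholders, not anything tied to RIM or the patent in question.

import imaplib

# Placeholder server and credentials - every device running this code
# sees the same server-side mailbox, which is the whole point of IMAP.
HOST = "imap.example.com"
USER = "someone@example.com"
PASSWORD = "app-specific-password"

conn = imaplib.IMAP4_SSL(HOST)
conn.login(USER, PASSWORD)
conn.select("INBOX", readonly=True)   # the mailbox lives on the server

# Ask the server which messages are unread; flags are shared across clients,
# so reading a message on one device marks it read everywhere.
status, data = conn.search(None, "UNSEEN")
print("unread message ids:", data[0].split())

conn.logout()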
They kept inking contracts with other companies. AT&T added the BlackBerry in 2002, in the era of GSM. The 5810 was the first truly convergent BlackBerry, offering email and a phone in one device with seamless SMS communications. It shipped in the US, with the 5820 in Europe, and Cingular Wireless jumped on board in the US, Deutsche Telekom in Germany, Vivendi in France, Telecom Italia in Italy, and so on. The devices had inched back up to around $500, with service fees ranging from $40 to $100 plus pretty limited data plans. The Treo came out that year, and while it was cool and provided a familiar interface to the legions of Palm users, it was clunky and had fewer options for securing communications. The NSA signed on, and by the end of the year they were a truly global operation, raking in revenues of nearly $300 million.
The Buying Tornado
They added web-based applications in 2003, as well as network printing. They moved to a Java-based interface and added the 6500 series, with a walkie-talkie function. But it was the 6200 series, at around $200, that turned out to be huge. This is when they went into that thing a lot of companies do: they started suing companies like Good and Handspring for infringing on patents they probably never should have been awarded. They eventually lost the cases and paid out tens of millions of dollars in damages. More importantly, they took their eyes off innovating, a common mistake in the history of computing companies. Yet there were innovations. They released BlackBerry Enterprise Server in 2004, then bolted on connectors to Exchange and Lotus Domino and allowed for interfacing with XML-based APIs in popular enterprise toolchains of the day. They also later added support for GroupWise. That was one of the last solutions I can remember using that worked with symmetric key cryptography; it initially required the devices be cradled to get the necessary keys to secure communications, which then ran over Triple DES, common at the time. One thing we never liked was that messages did end up living at Research In Motion, even if encrypted. This is one aspect that future types of push communications would resolve - as would Microsoft Exchange's ActiveSync.
By 2005 there were CVEs being filed against BlackBerry Enterprise Server, which racked up 17 over the six years the product shipped, up through version 5.0 in 2010, before it became BES 10 and, much later, BlackBerry Enterprise Mobility Management, a cross-platform mobile device management solution. Those BES 4 and 5 support contracts, or T-Support, could cost hundreds of dollars per incident. Microsoft had Windows Mobile clients out that integrated pretty seamlessly with Exchange. But people loved their BlackBerries.
Other device manufacturers experimented with different modes of interactivity. Microsoft made APIs for pens and for keyboards that flipped open. BlackBerry added a trackball in 2006 that was always a bit clunky. Nokia, Ericsson, Motorola, and others were experimenting with new ways to navigate devices, but people were used to menus and even styluses. And they seemed to prefer a look and feel like the menuing systems they already used on HVAC controls, video games, and even the iPod.
The Eye Of The Storm
A new paradigm was on the way. Apple's iPhone was released in 2007 and Google's Android OS in 2008. By then the BlackBerry Pearl was shipping and it was clear which devices were better. No one saw the two biggest threats coming. Apple was a consumer company. They were slow to add ActiveSync policies, which many thought would become the corporate answer to mobile management, the way group policies in Active Directory had for desktops. Apple and Google were slow to take the market, and BlackBerry continued to dominate the smartphone industry well into 2010, especially once then-president Barack Obama strong-armed the NSA into allowing him to use a special version of the BlackBerry 8830 World Edition for official communiques. Other world leaders followed suit, as did the leaders of global companies that had previously been luddites when it came to constantly being online. Even Eric Schmidt, then chairman of Google, loved his Crackberry in 2013, five years after the arrival of Android.
Looking back, we can see a steady rise in iPhone sales up to the iPhone 4, released in 2010. Many still said they loved the keyboard on their BlackBerries. Organizations had built BES into their networks and had policies dating back to NIST STIGs. Research in Motion owned the enterprise and held over half the US market and a fifth of the global market. That peaked in 2011.
BlackBerry put mobility on the map. But companies like AirWatch, founded in 2003, and MobileIron, founded in 2007, had risen to take a cross-platform approach to the device management aspect of mobile devices. We call them Unified Endpoint Management products today, and companies could suddenly support BlackBerry, Windows Mobile, and iPhones from a single console. Over 50 million BlackBerries were being sold a year and the stock was soaring at over $230 a share.
Today they hold essentially no handset market share and their stock performance shows it, even though they've pivoted to being more of a device management company, building on decades of experience working with some of the biggest and most security-conscious companies and governments in the world.
The Fall Of The BlackBerry
The iPhone was beautiful. It had amazing graphics and a full touch screen. It was the very symbol of innovation. The rising tide of the App Store also made it a developer's playground (no pun intended). It was more expensive than the BlackBerry, and while Apple didn't cater to the enterprise, they wedged their way in, first with executives and then with everyone else. Initially that was because of ActiveSync, which had come along in 1996 mostly to support Windows Mobile but by Exchange Server 2003 SP2 could do almost anything Outlook could do - provided software developers like Apple could make the clients work. So by 2011, Exchange clients could automatically locate a server based on an email address (or more to the point, based on DNS records for the domain) and work much like webmail, which was exposed in almost every IIS implementation that fronted Exchange. Office 365 was released in 2011, paving the way to move from on-prem Exchange to what we now call "the cloud." And Google Mail had been around for seven years by then, and people were putting it on the BlackBerry as well, blending home and office accounts on the same device at times. In fact, Google licensed Exchange ActiveSync, or EAS, in 2009, so support for Gmail was showing up on a variety of devices.
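For the curious, that server-location trick boils down to a naming convention: clients derive candidate Autodiscover URLs from the domain part of the address and see which one answers. The sketch below, assuming a placeholder address at example.com, only probes the well-known endpoints rather than performing the full XML request and response a real client would; it uses only the Python standard library.

import urllib.request
import urllib.error

def autodiscover_candidates(email: str):
    """Return the conventional Autodiscover URLs for an address's domain."""
    domain = email.split("@", 1)[1]
    return [
        f"https://{domain}/autodiscover/autodiscover.xml",
        f"https://autodiscover.{domain}/autodiscover/autodiscover.xml",
    ]

def probe(url: str) -> bool:
    """True if the endpoint responds at all (even a 401 or 405 usually
    means an Exchange server is listening there)."""
    try:
        urllib.request.urlopen(url, timeout=5)
        return True
    except urllib.error.HTTPError:
        return True          # got an HTTP response, so something answered
    except (urllib.error.URLError, OSError):
        return False

for candidate in autodiscover_candidates("user@example.com"):
    print(candidate, "->", "responds" if probe(candidate) else "no answer")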
BlackBerry had everything companies wanted. But people slowly moved to that new iPhone, or to Android once decent phones started shipping with that OS on them. BlackBerry stuck by the keyboard, even though it was clear that people wanted full touchscreens. The BlackBerry Bold came out in 2009. BlackBerry hadn't just doubled down on the keyboard instead of a full touchscreen; they had tripled down on it.
They had released the Storm in 2008 and the Storm2 in 2009, but they simply had a different kind of customer, albeit one that was slowly starting to retire. This is the hard thing about being in the buying tornado. We're so busy transacting, so intent on staying in the eye, that we can't see how the world is changing outside of it.
As we saw with companies like Amdahl and Control Data, when we focus only on big customers and ignore the mass market, we leave room for entrants in our industries who have more mass appeal. Since the rise of the independent software market following the IBM anti-trust cases, app developers have been a bellwether of successful platforms. And the iPhone revenue split was appealing, to say the least.
Sales fell off fast. By 2012 the BlackBerry represented less than 6 percent of smartphones sold, by the start of 2013 that number had dropped in half, and it fell to less than 1 percent in 2014. That's when the White House tested replacements for the BlackBerry. There was a small bump in sales when they finally released a product with specs competitive with the iPhone, but it was short-lived. The Crackberry craze was officially over.
BlackBerry shot into the mainstream and brought the smartphone with them. They made devices that were secure and worked seamlessly in corporate environments, at least for those who could pay to run BES or BIS. They proved the market and then got stuck in the Innovator's Dilemma. They became all about the features that big customers wanted and needed, and so they missed the personal part of personal computing.
Apple, as they did with the PC and then graphical user interfaces, saw a successful technology and made people salivate over it. They saw how Windows had built a better sandbox for developers and then built the best app delivery mechanism the world has seen to date. Google followed suit and managed to take a much larger piece of the market with more competitive pricing.
There is so much we didn't discuss, like the short-lived PlayBook tablet from BlackBerry, or the Priv, because for the most part they are a device management company today. The founders are long gone, investing in the next wave of technology: quantum computing. The new face of BlackBerry is chasing device management, following adjacencies into security, and dabbling in IoT for healthcare and finance - big-ticket buys that run from red teaming to automotive management to XDR. Maybe their future is in the convergence of post-quantum security, or maybe we'll see their $5.5B market cap get tasty enough for one of those billionaires who really, really, really wants their chiclet keyboard back. Who knows, but part of the fun of this is that it's a living history.
-
Imagine a game that begins with a printout that reads:
You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower.
Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions ran on a monitor, so a screen could just show a cursor, and the player needed to know what to type. Type N and hit enter and the player travels north. "Search" doesn't work but "look" does. "Take water" works, as does "drink water," but it takes hours to find the dwarves and dragons and figure out how to battle or escape them. This is one of the earliest games we played and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong.
The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois at Urbana-Champaign. As the computer monitor spread, so spread games.
William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor, the first nodes of the packet-switching ARPANET, the ancestor of the Internet. The hours were long, but when he wasn't working he and his wife Pat, a programmer as well, explored caves. Or he played the new Dungeons & Dragons game that was popular with other programmers.
The two got divorced in 1975 and like many suddenly single fathers he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco.
Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show that computers could appear to understand text typed to them, and it was most notably used in experiments that had a computer play therapist. Writing software for the kids, or gaming, can be therapeutic as well. As can replaying happier times.
Crowther had explored Mammoth Cave National Park in Kentucky in the early 1970s. The game follows his notes about those caves: the player explores the area using natural language while the computer looks for commands in what was entered. The original Fortran code ran to about 700 lines on the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread.
Programmers in that era just shared code; source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10. He had gone to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game, asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, the Digital Equipment Computer Users Society, where a lot of people went for software at the time. The game was up to 3,000 lines of code when it left Woods's hands.
The adventurer could now enter the mysterious cave in search of hidden treasures. The concept of the computer as narrator began with Colossal Cave Adventure and is now widely used, although today we have vast rendered scenery and can point and click where we want to go, so we don't need to type commands as often. The interpreter looked for commands like "move", "interact" with other characters, "get" items for the inventory, and so on. Woods went further, adding more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update the game until the mid-1990s.
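To give a feel for what that kind of interpreter amounts to, here is a toy verb-noun parser, a sketch in Python rather than the original Fortran, with a couple of made-up rooms and items. It is nowhere near the real game's vocabulary, just the shape of the loop: split the input, treat the first word as the verb and the second as the noun, and fail politely on anything else.

# Toy verb-noun interpreter; rooms and items are invented for illustration.
rooms = {
    "road": {
        "description": "You are standing at the end of a road before a small brick building.",
        "north": "building",
        "items": [],
    },
    "building": {
        "description": "You are inside a small brick building, a wellhouse for a large spring.",
        "south": "road",
        "items": ["water"],
    },
}

location = "road"
inventory = []

def handle(command):
    """Parse a one- or two-word command and return the game's response."""
    global location
    words = command.lower().split()
    if not words:
        return "Say something."
    verb = words[0]
    noun = words[1] if len(words) > 1 else ""
    room = rooms[location]
    # Movement: accept single letters or full direction words.
    direction = {"n": "north", "s": "south"}.get(verb, verb)
    if direction in ("north", "south"):
        if direction in room:
            location = room[direction]
            return rooms[location]["description"]
        return "You can't go that way."
    if verb == "look":
        return room["description"]
    if verb in ("take", "get") and noun in room["items"]:
        room["items"].remove(noun)
        inventory.append(noun)
        return "OK, you have the " + noun + "."
    return "I don't understand that."   # "search", for instance, fails here

print(handle("look"))
print(handle("n"))
print(handle("take water"))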
James Gillogly of RAND ported the code to C in 1977 so it would run on the newer Unix architecture, and it's still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80, and followed that up in 1981 with a version for MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga.
Bob Supnik rose to a vice president at Digital Equipment, not because he ported the game, but it didn't hurt. Throughout the 1980s the game spread to other devices as well. Peter Gerrard implemented the version for the Tandy 1000. The Original Adventure was a version that came out of Aventuras AD in Spain, and they gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even though it was eventually eclipsed by Zork, which came along in 1977, and by Adventureland in 1979.
Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure. The two became transfixed and opened their own company to make their own game, Mystery House, released the next year. The text adventure genre moved to a new level when it sold 15,000 copies and became the first hit. Rogue and others followed, each more interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game Adventure.
Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He had been inspired into a life of programming by a college professor, Ken Thompson, who taught him while on sabbatical from Bell Labs. Bell Labs is where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. Robinett's Adventure went on to sell over a million copies and the genre of fantasy action-adventure games moved from text to video.