Episodes

  • Welcome to the final installment of Hardcore Software. It has been an amazing journey across the 115 or so sections, including bonus posts. I owe a huge debt of gratitude to those of you who have followed along the journey of the PC and my own growth and lessons. Thank you very, very much.

    I have a few more bonuses planned, including a compendium of Microspeak and a bibliography of books and magazines that I collected. For paid subscribers, I will be sending out an update on how billing will end. For “True Blue” subscribers, please expect an email about receiving your compiled version of the work. It’s not too late to order that and also have access to all the old posts. I will also be filling in audio for the first 70 posts in early 2023.

    Hardcore Software describes a personal journey. It is also one that happened to coincide with the PC revolution—the early days all the way through the final days of the revolution. The PC still marches on, but it is different. The PC remains essential, though it is no longer central to the agenda of computing as it once was. That is what I mean by the end of the revolution.

    This post is free and comments are turned on for all Substack users.

    Back to 107. Click In With Surface

    Windows 8 was a failure.

    Hubris. Arrogance. Lunacy. Egomaniacal. Pick any word to describe the product; it was likely used somewhere. No one knew, or felt, the weight of the product failure more than I did.

    Nearly every successful Microsoft product had survived our “it takes three versions to get it right” modus operandi. Esther Dyson, a technology investor and journalist, writing for Forbes in an article titled “Microsoft’s spreadsheet, on its third try, excels,” said, “It’s something of an industry joke in the software business that it takes Microsoft three tries to get it right. There’s Windows 3.0, Word 3.0, and now Excel 3.0.” She wrote that in 1991, reviewing the third version of Excel.

    No Windows leader made it through the odd-even curse of releases, certainly not three major releases of Windows from start to finish.

    My hope had been for a credible Windows 8 knowing we weren’t finished, which was standard operating procedure for new Microsoft products. We knew where we wanted to take the product over time—the hardware, the software, and the apps. But none of that happened. For reasons I still do not fully understand, for the first time I could remember, Microsoft quickly and completely withdrew and actively erased Windows 8 in an almost Orwellian way—even Clippy was allowed to preserve its dignity. I try to imagine what would have happened had Microsoft given up on Windows the first time, or Windows Server, Exchange, Word, Excel, or PowerPoint. All of those took multiple iterations to find product-market fit, to win both hearts and minds.

    Requiring three versions was not a Microsoft thing. It is a product development thing. Even in a big company you must ship the first version—shipping a “V 1.0” (v for version) is always a miracle. Then you need to fix it, and that is version 2.0. Then by version 3.0 not only does the product work, but so do the sales, marketing, positioning, pricing, and more. Product development is always a journey. Always.

    With years of hindsight, including the new mobile market, the PC market, and Apple, many of the initial problems with Windows 8 were not nearly as egregious as much of the commentary made them out to be. Or maybe the commentary was right and what was egregious was not that we made a product that did what it did, but that we made Windows do those things? Or perhaps we simply did it all too soon? Or that we, surprisingly, lacked the patience to get it right?

    There was commentary on me as a leader, as a person. I knew that was born of immediate frustration and not enduring. That’s why I remained quiet and did not speak out in 2012 as I moved on to a new experience in Silicon Valley and working with entrepreneurs. I understood and even respected where the emotion came from and the forces that produced it. Over time, individuals who facilitated that commentary have since apologized directly. I was proud to be part of more than two decades of building products, processes, teams, leaders, and people—a culture—that were the highest quality, best equipped, and most talented at Microsoft in the PC era.

    The problems we needed to solve with Windows 8 and Surface were readily apparent, as I strongly believed the moment we shipped. Nothing anyone wrote about either was surprising or news to those of us who had lived with the products. The commentary on the severity of the problems, and how and what to fix, was debatable.

    Microsoft had become synonymous with the PC, but could it also reinvent the PC? That was what we set out to do. The problem was that the people who loved PCs the most weren’t interested in a new kind of PC. They wanted the PC to get better, but in the same way it had for decades—primarily, more features for tech enthusiasts and more management and control features for enterprise IT managers. They simply wanted an improved Microsoft PC from Microsoft—launching programs, managing windows, futzing with files, compatibility with everything from the past, and more like that. They wanted more Windows 7.

    Instead, they got a new era of PC, a modern PC, from other companies, and it would be called iPhone, iPad, Chromebook, or Apple Silicon Macintosh and they would be okay with that. Today in the US, Apple’s device share is off the charts relative to any point in the past. Apple holds a greater-than-majority share of phones. Macintosh is selling at an all-time high of 15-20% of US PCs depending on the quarter. As for the iPad, the device loathed by so many who believed thinking about tablets was the underlying strategic failure of Windows 8, Apple has perhaps 500 million active devices and sells about 160 million iPads per year, or more than half the number of PC sales. The business and personal computing market is no longer the PC market, but vastly larger, and the only position Microsoft maintains is in the part that is shrinking relative to the whole and on an absolute basis.

    The iPad is worth a special mention because of the tablet narrative that accompanied Apple releasing their product just as we started Windows 8. The iPad had a clear positioning when released—it was between a phone and a PC and great for productivity. It was an odd positioning considering it was precisely a large, but less featured, iPhone. Soon Apple would say the iPad was the embodiment of the future Apple sought. Since then, however, the iPad has been mired in a state of both confusion and poor execution. While taking advantage of the innovation in silicon and the undeniably impressive innovation in Apple’s M-series of chips, the software, tools, frameworks, and peripherals directed at the iPad have, for lack of a better word, failed. For all the unit volumes and significant use as a primary device, it has not yet taken on the role Apple articulated. I would not have predicted where they are today.

    Jean-Louis Gassée, hired by Steve Jobs and former leader of Apple hardware and later creator of BeOS, had this to say in his wonderfully reflective weekly newsletter, Monday Note:

    The iPad’s recent creeping “Mac envy”, the abandonment of intuitive intelligibility for dubious “productivity” features reminds one of the proverbial Food Fight Product Strategy: Throw everything at the wall and see what sticks.

    In competition, it takes the leader dropping the ball and someone being there to pick it up. Whether Apple truly dropped the ball with respect to the iPad or not, it is certainly clear that Microsoft in a post-Windows 8 environment was in no position to pick it up. That is a shame, as I think Apple created an opportunity that might have been exploited.

    Instead, in the Windows bubble, as much as anyone might have wanted new features or improved basic capabilities, they wanted compatibility with all that had come before. Legacy applications, muscle memory, and preservation of investments were the hallmarks of Windows, not to mention the ecosystem of PC makers and Intel. Why question those attributes with a new release in 2012?

    There was a comfort in what a PC was already doing and refining that while leaving paradigm shifts to other devices was, well, comfortable. Many saw the PC as both irreplaceable and without a substitute. Like the IT pros who knew the Windows registry, they were comfortable with their mastery of the product even if the world was moving on.

    Unfortunately, what is comfortable for customers is not always so comfortable for the business. Without a dominant and thriving platform, Microsoft is like a hardware company or an enterprise-only software supplier—reliant on deep customer relationships, legacy product lock-in, pricing power, and big company scale to drive the business. Those can work for a time, and perhaps even bide time hoping to invent the next platform. IBM continues to prove this every quarter, much to the surprise of technologists who today don’t even know what IBM makes or does.

    Windows 8 was not one thing, and therein lies its main challenge. Windows 8 was not simply a release of Windows, some new APIs, and a new PC. Windows 8 was a paradigm—it could not be disassembled into components and still stand. If we had just built a Microsoft PC for Windows 7, that would not have changed the trajectory of the business, just as we have since seen with hardware efforts. We already saw how touch on existing desktop software didn’t deliver. Moving from Intel to ARM without new software, or worse just porting existing software, was running in place at best and taking focus away from worthwhile endeavors at worst.

    To shift the paradigm and to enable Microsoft to have assets and compete in some new way required an all-or-nothing bet. Everything we know about disruption reveals how companies do what they can to avoid those situations and try to thread the needle. I was totally guilty of aiming to avoid that.

    The temptation to cling to the Windows legacy, or to leverage it depending on perspective, was constant. As I learned over every previous, and smaller, transition, giving customers a little bit of the past—a compatibility mode or a way to opt out of changes—meant customers would take advantage of it. The new product wouldn’t be new to most customers. With Microsoft’s decade-long commitment to supporting that new-old state, we would have been shackled to the very platform that was already in an anemic state—Windows and the APIs for apps, Intel and the ever-increasing transistor march, and an ecosystem of OEMs and ISVs focused elsewhere.

    There were five hot buttons I felt weakened our all-or-nothing bet. There was no quick fix for these, even in hindsight. Worse, each proved to be early more than incorrect. In technology, however, as I have noted several times in this work, being early is the same as being wrong. People will debate these even as I write them years later. I think we will always have fun doing so. I certainly will. It is the nature of engineering to forever debate the causes and decisions that led to failure.

    First, the Start screen clearly served as the emotional lightning rod for the release. It was, to some, as if we ripped the heart and soul from Windows. I didn’t see it that way, obviously. I saw a mechanism that had outlived its usefulness just as we saw menus and toolbars outlive their usefulness in Office, or even character mode decades earlier. Despite being there in the summer of 1995, I did not see the quasi-religious significance of the menu. I saw something that people had mostly stopped using—the taskbar, ALT-TAB, search, email attachments, and browser tabs replaced launching and switching programs. Tech enthusiasts, however, had many more programs, dozens of utilities, and elaborate custom configurations. I know because they sent us the screenshots in protest. I would posit today that most tech enthusiasts are using the taskbar and search to find and launch programs, exactly as we designed Windows 8 to work. I also think they would have been fine turning off their computers with the power button and would have survived not having shutdown on the screen all the time, though they would miss the chuckle from the timeless hilarity of Start -> Shutdown.

    Phone app screens grew to become the way computer programs are launched and managed. They take up the full screen and they are super easy to navigate, unlike the Start menu, which even with a mouse grew increasingly finicky and awkward. Apple’s home screen evolved to be more like Windows 8, including search and an All Applications view. Windows ultimately evolved to become more like a phone screen and far less like the formerly beloved menu, but nothing would have appeased people at the time. Nothing, other than providing (an option, of course, everything is an option) the Start menu back for “non-touch” or “Intel” PCs or something arbitrary like that—a compatibility mode. We had a compatibility mode—it was Windows 7.

    Second, there was the presence of the desktop in Windows RT, the version of Windows that didn’t run any existing desktop apps. Additionally, the desktop not being the first thing to pop up after starting any version of Windows proved to be “disorienting” in ways that were more reactionary than actual. From the first demo at All Things Digital when we were asked about “two modes,” I was not able to find a way to describe this feature. If a product requires explaining, it’s already lost.

    On Intel Windows, we wanted to remove a level of indirection and simply have a place to start. The desktop, with its cacophony of uses, had become a hairball. It did not roam between devices well (due to screen size and contents), and the slew of uses as a file cabinet, scratch pad, dashboard, and program launcher made improvement impossible. Alas, it was viewed as sacrosanct and another place where the world was changing but disproportionately less so for power users. The move to phones, browser apps, cloud storage, and multiple devices had made the desktop an afterthought for most. We were just too early.

    Third, on ARM we faced another challenge: Our own desire to ship meant we shipped a product that was knowingly incomplete. We simply had too many places to return to the legacy experience. Apple chose to hide those capabilities on the iPad until they were ready and perhaps that would have been a better approach for us too. We should recall that the iPhone shipped without copy and paste, and the iPad was literally a big iPhone with fewer built-in apps and no phone capability. We wanted to embrace the capability of the operating system without compromising quality, security, and so on. For example, the iPad had no files or file management capabilities, including using devices such as USB thumb drives, until 2017. We supported those out of the gate, but that sometimes required using the legacy desktop and explorer. We didn’t have time to build the WinRT file manager app we had already designed, as an example, or update everything in the Windows control panel. The vast ecosystem took time to move, but we had to begin a transition.

    Many believed omissions presented a weird modality or duality. Our explanation that this was always how Windows evolved failed to satisfy or reduce perceived confusion. Windows carried the DOS box and command line (and still does) and even .INI files, for those who cared, for decades—and it was widely used by tech enthusiasts and administrators alike—even for copying files! Having some of the old along with the new had always been the Microsoft way. It just seemed so inelegant compared to Apple.

    I, we, did not love that we had to do this. While we considered many alternatives, we didn’t have a better approach without taking forever to ship. The enemy of the good is the perfect. The industry was moving away from Windows, and we had to get in the game. That is decidedly different from Apple releasing a phone into a market that was far from settled and expected little from Apple, which had nothing to lose. We had everything to lose and were losing everything.

    Fourth, there was one very significant reason we required the desktop and why it was not simply a bonus like the old command or DOS box.

    Office.

    The shift to touch, phones and tablets, and a modern operating system with new apps was much larger than any one of those individually. To decompose the strategy into components is to wield the technology buzzsaw, or to take on disruptive technology shifts the way entrenched incumbents tend to when they fail. To have a viable product, we needed to execute on all, simultaneously.

    These shifts happened all at once not because the PC did not take on each feature individually, but because the PC model itself could not possibly address the shortcomings that built up over years. Loosely aggregated features, engineered across adversarial partners and combined into a product, were well-suited to the invention of the personal computer, but not to the optimization of the experience. The idea that the PC could move forward into a new era by simply adding touch, or some new user interface features on top of what was there, or adding an app store while still supporting downloading code from the internet, or even adopting ARM, each as a point solution, was simply the old way of solving problems, the PC way.

    Surface RT was designed to be the epitome of productivity for mobile information work. We wanted Surface to be people’s primary work computer, the way they used a laptop (PC or Mac), but with all the reliability, security, battery life, and mobility of an iPad. To accomplish that we needed Office: Word, Excel, PowerPoint, and (some would say) Outlook. The problem: I was not successful at evangelizing the new platform opportunity to the Office team. Let that sink in.

    The hallmark of Microsoft’s strategy had always been apps and platform, platform and apps, and if necessary, by force. Early on the apps were forced to work on Windows against their judgment, and then later Windows was forced to be good for the apps even when it thought Office was not the focus, and this see-saw continued for the benefit of the industry. It was as if we had reached a point where both businesses were so entrenched that their own worldviews precluded a bigger, all-Microsoft bet. Perhaps we reached a level of fatigue in cross-company bets after the Longhorn and .NET eras.

    When we presented the Windows 8 vision years before, I sat next to SteveB and friends from the Office team who were invited. It was during that meeting that we began to make the case that the new runtime existed to build new apps—new ways of working, focused on collaboration, sharing, tasks, cloud storage, and less on massive amounts of formatting, scattered document files, and so on. The collective questions were more about how to maintain compatibility for Visual Basic for Applications, third-party add-ins, and the Ribbon. We never bridged that gap. For the whole product cycle, Office wanted to port existing Office to ARM and run it. The biggest irony was that the entire time the Mac Office team was busy building Office for the iPad, which is why it was ready to release on the heels of Windows 8. There are two sides to this story. The Windows 8 leaders knew both sides because we were previously the leaders of Office. There was a view that even that cross-pollination contributed to the challenges. Ultimately, there was a need for both the old Office and a new set of tools. Silicon Valley was hard at work creating new tools for information workers—modern, mobile, cloud, web, SaaS tools for productivity—while Microsoft continued to pay the bills with existing apps, even if they sold under the 365 cloud moniker for much higher prices.

    Office was part of a symptom, a fatal symptom, of having no WinRT apps, which was obvious. Everyone in the industry knows that if Microsoft is serious about something then Office will participate. If Microsoft was not serious, then there was no Office support. Desktop Office sent the clear message that Microsoft was not serious about WinRT. It didn’t matter what we said as our actions spoke louder than words. We had OneNote and some experiments. But without Excel, there was no WinRT. We had many debates in executive staff meetings about juicing the ISV ecosystem. That whole ecosystem was focused on the browser while still suffering from the strategy fatigue from Windows Phone 7 (and soon 8) and the failed Longhorn strategies. We needed Office. We got OneNote. I still love OneNote to this day.

    We chose not to build a phone first or simultaneously. Perhaps the entire platform strategy would have been different if we had indeed executed on the mythical plan of having a phone and first-party hardware out of the gate. Since that would not have finished perhaps until 2015, debating it at all might be irrelevant. By 2015 the world would be solidified around iOS and Android, leaving little headroom for us, plus we would have had a couple more years of failure of Windows Phone, services, and an aging Windows 7. Fans of Windows Phone will continue to debate and champion that legacy the way people fondly remember the Amiga or Newton, so this is one debate I intentionally avoid.

    And fifth, there were the product names themselves, Windows RT and even Windows 8. Many argued that any product without a traditional Start menu and full compatibility could not be named Windows. More would argue that naming Windows RT as we did “confused customers” as to what would run where. That was true, but Windows had long had variants that came with limitations or exceptions: Windows NT, Windows CE, Windows Embedded, Windows Media Center, even Windows 2000, and most recently Windows Phone. All of these were confusing in some relative sense, but all were given time to rationalize the confusion.

    Should we have abandoned the legacy that was embodied in the Windows name? I certainly thought about that a lot. Here again, everything about disruption would have said absolutely name it something different. Heck, put the team in another building, get them new card keys, and so on like the Zune team did. That same theory also showed how no one ever does that because they look at the cost and effort, along with splitting all the energies of the company, and reject any such approach as heresy. As someone who grew up knowing that Windows NT or even Windows XP didn’t really run everything Windows 95 ran, I felt like we could pull it off. I was wrong. I don’t think a new name would have worked any better and would wager that any new name would have been cutesy and clumsy at the same time given Microsoft’s history. Then again, many chuckled at iPad too.

    There were a host of smaller reasons as well, some related to timing and others to just trying to get a product to market. The original Surface did not support LTE, which undermined our mobility and mobile chipset message. We chose a unique aspect ratio for the screen, which proved to just be wrong for productivity. SteveB pushed on this as he personally began to like eReaders on Android that had paperback book aspect ratios. We all agreed, but it was too late. The original Surface RT keyboard, the Touch Cover, was innovative but not productive. There were other software aspects as well, like my refusal to include Outlook, which would reduce battery life by 1-2 hours simply because it wasn’t a modern app and had incomplete connections to our services backend. The lack of support for traditional group policy irked IT pros who felt they needed to secure any device using the invasive methods used on PCs. Those methods were battery draining while opening up a host of Win32 compatibility requirements. Modern mobile device management tools are far better than what IT did at the time, and we built those capabilities into the first release.

    I tend to look at the failures of the product strategy because that is where most of the focus sits. But I had failures in the go-to market as well. I am rather fond of pointing out that success and failure are elements of the 4 Ps of product, price, place, and promotion. In the case of Surface RT, we got the price right assuming we sold a lot of them, and surprised some people even in the face of super cheap Android tablets. We got the place, the distribution strategy, entirely wrong. The Microsoft Store retail team was insanely excited to have exclusive access to Surface, but there weren’t enough stores to sell the number we needed or wanted to sell. I was unable to make the math clear to leadership. We literally could not sell all we made even if the stores operated at multiples of capacity, 7 x 24. I begged and pleaded to expand distribution, and so did our partners, right up until the end. In the process we angered our Windows retail partners around the world. Then we had a whole bunch left over, and in 2013 Microsoft took an inventory write-off to complete the erasure of Windows 8 from memory. I remain convinced, perhaps naïvely so, we could have moved many more units had we opened the retail strategy. That would have allowed for the potential to iterate to a second and ultimately third version.

    No matter what happens, someone always said it would happen, and when a product fails those who predicted it are quick to make themselves known. I admit I found it enormously frustrating to see the revisionist views appear, whether with the press or with people at Microsoft. President John F. Kennedy famously said, “Victory has a thousand fathers, but defeat is an orphan.” I certainly felt alone in defeat. It is well-known that as a leader, a huge part of the job is to stand alone and absorb the defeat. I think I did that at least as well as I made sure any successes achieved over decades were those of the team or specific individuals and definitely not mine alone.

    While our issues are quite clear in hindsight, they were also visible at the time, which only serves to fuel the told-you-so reactions. Every technology shift had doubters focusing on some specific issue or challenge rather than seeing the complete package, like all those people who said the Macintosh was not a real computer because of the mouse, or that the iPhone touch screen would not work. Had the iPhone failed, it is easy to imagine the self-congratulations from those who noted in their reviews the lack of copy/paste, missing Adobe Flash support, incomplete push email, or the new on-screen keyboard.

    I knew the feedback on each of the specifics, but also, and naïvely, hoped the whole system could be seen for what it was trying to do…and what it would do if given time and effort. The story of the ribbon in Office, one of the biggest and most successful UI reinventions undertaken in a product, was one filled with doubters inside and outside Microsoft, as we discussed. Some of those who were most negative about the change even find themselves leaders in today’s Office.

    But why should people have been patient? They had the iPhone, iPad, Android phones and tablets, and Chromebooks all to choose from. They were happy to use those and just stick to Windows for all the familiar legacy work.

    As we progressed to the October 2012 launch of Windows 8, a lot was happening that perhaps marred our ability to see clearly and, more importantly, collectively. It was an anxious time. We spent the summer in leadership team meetings gearing up for the yearly sales meeting ahead of a big launch year. The past ten years of flat stock price and lackluster product success had taken their toll.

    The difficulties with Windows Phone were compounding, despite poorly grounded optimism. Phone apps were not gaining traction and, worse, the hacks to gain apparent traction were backfiring, such as paying developers to write apps or adding apps to the store that were simply web sites. Fingers were pointing. With Surface, the question became: Should we build our own phone? Should we kick off a skunkworks project like Zune (that team was available) to build a phone? Would that truly address what we were seeing in the market? The phone was the next platform, a platform for the entire planet, and we were not even registering as a viable third place. The opportunity to align Windows and Windows Phone had long passed, and there was still no appetite to weigh down the phone strategy with a full Windows-centric approach. It was always as if the phone was on firm footing, and it just needed to borrow a bit more code from Windows. It was never on a path to success.

    The Windows Server team seemed to resist embracing the new Azure product, which remained organizationally distinct. In hindsight this proved reasonable, as Azure came to be dominated not by Windows as hoped but by Linux. We were obligated to support them, so that was creating a lot of extra work for us, delivering for both the on-premises server and the new data center product. The enterprise server and server apps data center era, as we knew it, was over. It was archaic and unmanageable. Microsoft was barely able to keep its own Windows data center running. The future of the server business was the cloud, but our enterprise customers were strongly resisting, or more likely entirely unaware of, that fact. AWS was still almost entirely a Silicon Valley novelty. This made it easier to ignore.

    Our online services, from communications to identity to storage, which I saw as key components of the Windows platform as we knew it, were all seen as cost centers despite the obvious need to own and build widely used cloud services across devices. A complete device experience required services extended to the cloud, built natively for the cloud rather than ports of existing Windows and Office servers. The device experience as we knew it was coming to an end without services.

    Microsoft’s transformation to an internet company—well documented as one of the most successful pivots of all time—turned out to be great for our revenue but lousy for our platform technology. The level of internet savvy across the company paled in comparison to Google and Facebook (yet the much-sought-after Yahoo rapidly faded). Little, if any, Microsoft technology or platform was part of the rise of these huge new powers. The only organic internet-scale business was Bing, which continued to struggle, though ironically it has today found a niche disguised as the anti-Google DuckDuckGo.

    Across Office, Windows, and Servers, the reality was our pricing power, distribution moat, and enterprise account relationships combined to provide comfort in the face of shifting platforms. The reality of business disruption was right in front of us. It was plain to see. Yet the business was apparently thriving, just as the classic book on disruption would predict.

    Despite the massive revenue success and great numbers, the technology story looked bleak to me. I felt alone in that concern. I sent a memo to the leadership. It was a plea to develop a point of view about what these technology shifts implied, one that went beyond playing defense—Without a Point of View, There Is No Point. I tried to argue that we had been lulled into a sense of complacency by having succeeded through the dot-com bubble yet ended up weak in every technology shift that followed. We were losing everywhere, except revenue. We did not have a platform, which as a platform company was a big deal. My memo, which turned out to be the last one I wrote, was decidedly a polemic meant to stir people up.

    I also wondered whether it even made sense to keep score, so to speak, with products and technologies. Maybe all that mattered was leading by business metrics like revenue, profit, and cash flow. There are worse things than being a huge money-making corporation.

    Leading Microsoft to reimagine Windows had been accomplishment enough for me. I knew what I had signed up for more than five years earlier. It was much more than even I had planned on. But it was time. Ultimately, I could not break the odd-even curse.

    After the October launch in New York, it was time to mark the end of my career with Microsoft. With great clarity, I knew it was time to use that resignation letter I’d been holding on to. One last meeting with SteveB, and some quick rounds across my direct reports, and then I hit send. Our split was undeniably mutual. I felt a sense of relief. Compared to leaving the Office team for Windows, I was much more certain this was the right decision. I was eager for a context switch.

    Once the decision was made, moving on immediately was best. I never wanted a big send-off. Quite the contrary. I had gone through two years, or eight depending on how one counts, of Bill “transitioning” (not that there is a comparison). Then there was the year of transition through Windows Vista that I experienced. I’d seen too many people stick around for too long, confusing the team and dragging their departure out.

    What I failed to understand, and deeply regret, was that in our collective haste to move on I left behind the most superb team, ready to take over and move things to the next level, but not prepared for what would actually follow. I was selfish and the people who gave the most to create the new Windows team paid for that.

    The “post-Sinofsky” unwinding of Windows 8, not just the product, but the organization and processes as well, became even more painful than the market reception of Windows 8. I let the team and my best friends down. I left them to pick up the pieces I should have been picking up myself. The management above them failed to show them the respect they deserved. Nearly every one of them represented in these half-million words—the best leaders Microsoft created across every discipline—has left the company. So many of them are creators, entrepreneurs, and thought leaders in ways that Microsoft does not even participate in today.

    BillG had always been fond of saying that tech companies have a remarkably difficult time leading as technology crosses generations. Leaders of each era might maintain scale, rarely evaporating, but they lose influence and presence in the market. Technology analyst and friend Benedict Evans wrote:

    Every day of Windows 8 development, I worried about losing our audience of developers who hung on every word of not just what new features for Windows and Office were being shipped, but what code they could write, what business they could build, what customer problem they could solve by making a singular bet on Microsoft’s inventions to power them. I’d seen the relevancy of Office fade even while the business grew. Many companies have core fan bases, but Microsoft was blessed with people and companies building businesses on our technology, uniquely and singularly, and many of them moved on. Developers are not the end, but a means to a force-multiplying economic relationship with billions of people. I’d seen how easy it is in a large organization to find a bubble to hide out in—a bubble where our products are as popular and relevant as they once were.

    Paul Graham, fellow 1980s Cornellian, founder of the Y Combinator startup accelerator, and described as a “hacker philosopher” by author Steven Levy, wrote two essays on Microsoft that were rather prescient, if not spooky. They were written so early as to be easily dismissed or perhaps wishful thinking. In 2001, he wrote “The Other Road Ahead,” a nod to the Bill Gates book The Road Ahead. In this essay he described the new Web 2.0 paradigm relative to the desktop computer. He aptly described the bubble Microsoft was in, that I was in, and that we just didn’t know it:

    Back when desktop computers arrived, IBM was the giant that everyone was afraid of. It's hard to imagine now, but I remember the feeling very well. Now the frightening giant is Microsoft, and I don't think they are as blind to the threat facing them as IBM was. After all, Microsoft deliberately built their business in IBM's blind spot.

    I mentioned earlier that my mother doesn't really need a desktop computer. Most users probably don't. That's a problem for Microsoft, and they know it. If applications run on remote servers, no one needs Windows. What will Microsoft do? Will they be able to use their control of the desktop to prevent, or constrain, this new generation of software?

    The essay is long and well worth reading for its predictive powers. While not every technical point landed precisely as described, especially the iPhone and apps, it is a brutal read that was conveniently ignored by us at the time. As if to double down, in 2007 he wrote another essay, “Microsoft is Dead,” wherein he described what took place in the mid-2000s that killed Microsoft. This was just after Windows Vista, when it was easy and rather popular to assert an end. That popularity, however, was because of Vista, not because of the four factors in his essay: Google, the desktop being “over,” broadband internet, and Apple OS X on new Macs. He concluded with a candid and brutal take:

    Microsoft's biggest weakness is that they still don't realize how much they suck. They still think they can write software in house. Maybe they can, by the standards of the desktop world. But that world ended a few years ago.

    I already know what the reaction to this essay will be. Half the readers will say that Microsoft is still an enormously profitable company, and that I should be more careful about drawing conclusions based on what a few people think in our insular little "Web 2.0" bubble. The other half, the younger half, will complain that this is old news.

    Despite Graham’s dire predictions, people continue to rely on Microsoft and bet on the company today. Microsoft is an enormous business. It is undeniably a different kind of leader, more a comfortable companion than a guide taking you to new places.

    It is popular to say that the operating system is no longer key, that the definition of a platform has changed, and that the battle is now above the OS. That’s a great rationalization, and as Jeff Goldblum’s character in The Big Chill said, “Don’t knock rationalization. Where would we be without it? I don’t know anyone who can go a day without two or three rationalizations.” The biggest business at Microsoft remains Microsoft/Office 365, and as valuable as email and video conferencing are (and as important as transferring the capital and operational costs to Microsoft in exchange for a huge price increase was too), the unique intellectual property in Office 365 that defies competition is Word, Excel, and PowerPoint. These remain key tools used in daily mission-critical work by perhaps 300 million information workers. Those people, at least 90 to 95 percent of them, rely on Windows to do that work, which is Microsoft’s second-largest business.

    Bundling the new Microsoft Teams with the 365 service perfectly reflects this legacy approach, a well-worn strategy to support the existing business and pricing rather than risk building a new business and creating a new revenue stream. In this work I described the bundling of Outlook, OneNote, and SharePoint in this same regard during the expansive period of enterprise software. Microsoft loved a good bundle. Customers did too. I suspect this time with Teams will end differently, primarily because the computing world is in a different place and the requirement that customers settle for a bundled product is nowhere near the same as it was in the early or middle stages of the PC era. Certainly, the macroeconomics of 2022 will provide a foundation for a bundle, but that has not proven sustainable in the past.

    The virtuous cycle of apps and systems envisioned by BillG in the 1980s stands as the hallmark of Microsoft, but it is not growing in users and is not part of a thriving ecosystem. New customers aren’t coming to PCs with Office even at a rate that replaces those who retire. Windows PCs are not benefiting from advances in a hardware ecosystem that transitioned to mobile. Developers aren’t building new software for Windows. The cycles have been broken.

    PC sales in 2022 will likely have declined 15-20% from 2021. Many will point to the economy or foreign exchange rates or the length of the replacement cycle, or maybe even Apple’s inability to figure out its own product strategy. At some point the only conclusion is that the PC, like the mainframe, minicomputer, and workstation before it, is a mature and saturated market. There will be good years and not-so-good years, but the PC or PC server running Windows will never be the platform it was. Even if people develop new products on it, those products will not take advantage of what a new PC has to offer, which is the definition of a platform.

    The web and smartphones have assumed the role of the information platform for the world. This time is different, however. Together these two have touched every human on the planet. That hasn’t happened with any other foundational platform, to use the term broadly, in history except for perhaps fire. All the amazing, connected platforms from the 20th century reached their limits long before reaching every person everywhere, including rail, car, roads, airplanes, telegraph, telephone, television, electric grids, sewage, and more. Whatever comes along next will also be the greatest technology displacement in the history of humans as it will replace the internet-connected smartphone for every person on earth. That is going to be an incredible innovation.

    Craig McCaw, one of the earliest pioneers of mobile phones in the US via his namesake company and a graduate of Seattle’s Lakeside School a few years before Bill Gates, described humans as inherently nomadic when it comes to the technology they use. He first made this observation with respect to mobile phones compared to landlines that tethered users to a wall. Technology is broadly adopted when it first meets people’s needs, and not before. Then over time, when a product fails to meet their needs and something better comes along, they simply move on. Smartphones came along and, when they became good enough, people simply stopped demanding new PCs.

    BillG once said to me of another legendary company facing challenges that things always seemed brightest before a precipitous decline. That’s what Windows PCs were facing. Windows 7 was peak PC the way Office 2007 was peak Office. Peak, not in terms of financial success, but peak in terms of influence, centrality, excitement, and general interest. Microsoft was in the midst of disruption, to use that overused phrase one more time. Not everyone agreed about where we were in the arc, and that was the problem.

    Either way, it was the end of the PC era.

    We knew that because we could read, watch, or hear about it all by swiping with a finger or by asking about it on that modern computer in our pockets or one of the hundreds of millions of tablets on sofas, in the back seats of cars, or in airplane seats. That computer was the reinvention of the PC. In no shortage of irony, the company that brought us this new computer was Apple—the company that for all practical purposes invented the PC in the first place.

    Windows PCs are never going to reach 500 million or more units a year as analysts had once predicted. The question was not how high sales would go but how low they could fall—peak PCs ended up being the Windows 7 upgrade cycle that resulted from a combination of poor demand due to Windows Vista, delays due to the recession, and lengthening PC lifetimes. A market of 300 million PCs is a fantastic business (about the same as 2007), but it isn’t two billion iOS devices in use or the more than three billion active Android devices. It certainly isn’t the growth opportunity of bringing PCs to every home and desk across emerging markets in Asia and Africa like we had hoped a year or two earlier.

    Did Windows 8 cause PC sales to decline because it was the wrong product for the PC? Or did the substitution of phones for PCs in the eyes of customers cause the decline in PC sales, which made Windows 8 the wrong product? Years later all I can say with certainty is that both happened at the same time, each correlated with the other. Importantly, would a more incremental and traditional release of Windows have really saved PC sales or even spurred significant growth? Does anyone really think this today? I didn’t then and certainly do not today.

    Looking back, there are small things that I might have changed, though I can say with confidence that would not have altered the outcome. To date, no one has offered up a plan that would have had a better shot at helping Microsoft achieve the same level of impact and influence in this next era of computing. It makes for a fun column or Gedankenexperiment to explore the counterfactual, but I didn’t really have a way to do that in real time. There’s no A/B test in building Windows.

    Microsoft was going to be big for decades the same way IBM had remained significant, but it deserved its chance to earn a spot as a platform as influential and central to computing as it had originally been. The most obvious answer is also the most difficult for me to accept, which is that maybe there just wasn’t a plan. Maybe, unfortunately, where the world settled was inevitable. People moved on.

    It is easy to poke holes in this sort of logic. We were not caught off guard or in denial. This was not New Coke. We did not see that the youth market liked a slightly different PC, so we tried to tweak the PC to make it more stylish and appealing. Our strategy was similar to what Apple had done, though our approach brought our unique perspective. There was a clear case of parallel evolution, but Apple came to market first (again!) and there were many reasons to believe being second would have advantages, as had been the case for the PC, Windows, Word, Excel, Windows NT, Exchange, and so many other Microsoft success stories. That was not the case.

    When I look at the Tablet PC, Media Center, Windows Phone, or a host of other products, one thing is clear. Those products did not have the right substrate relative to how the market would evolve. There was nothing in those implementations that represented a first step in a progression to the products that did eventually dominate. It is fair to say the concepts were right, but the technical foundation was wrong because elements were simply not ready. Windows 8, in my biased view, had the right ingredients that were also market-ready.

    There is one exception to Windows 8 having the right ingredients, and I don’t know if the market gives Apple enough credit for what they have done. The Apple Silicon work is, at least for now, unparalleled. There was no likely outcome over the past decade that I think would have left Microsoft, and likely NVIDIA, in a position to compete with Apple Silicon, though a stronger Microsoft might also have influenced how Apple evolved. Apple Silicon was really in Apple’s DNA from the earliest days and was almost an inevitable outcome of the failure of Motorola, PowerPC, and then Intel to contribute to what Apple saw as its destiny. I look at that work and wonder how we might have responded, because we most certainly would not have gone down that path. It was one thing to assemble the components of a PC and bring it to market. It would have been quite another to have the audacity to create a chipset.

    Since leaving Microsoft at the end of 2012, I’ve spent many hours discussing and debating the specifics of Windows 8 with friends, former coworkers, current employees, start-up founders, reporters, and, well, people who are just curious. Most of the time is spent discussing platforms, ecosystems, and the tectonic shifts one faces after unimaginable success, at least when we’re not just tackling my favorite topic and the bread and butter of Silicon Valley, which is growing and scaling innovative companies and teams. There remains a desire, especially among the reenergized Microsoft community, for more specifics on the perceived misdirected product choices. In many ways that discussion is both easy and impossible. It is easy because there are only a few things to point to. It is impossible because if the situation could have been so easily addressed by changing something so readily identified, then it would have been a quick fix.

    Maybe wanting so much more for Microsoft’s products was the bigger mistake, more than changing the Start menu or breaking with legacy compatibility. That apocryphal Henry Ford adage about cars and horses was meant to point out that people express their needs relative to what they understand the possible solutions to be. Windows customers wanted Windows, a faster Windows, but they didn’t want something different…until they did.

    There’s an old USENET meme used to make fun of seemingly trivial problems by claiming something is the “hardest problem in computer science.” It might very well be that the actual hardest problem in computer science is a company reinventing its own successful computing product for a new era in a way that existing customers are willing and excited to use. The irony is that those same customers always seem willing, and excited, to jump to the next era of computing using products from another company. Maybe that is a net positive and it is nature’s way of restoring, even rebirthing, our technology foundation.

    The impact of Windows 8 and the way Microsoft chose to move on so quickly did have one important and chilling effect. Not only did so many people leave or were made to leave, but the culture that replaced them became risk-averse, one that rewarded not failing more than taking risks. There was a fear of repeating a Windows 8, whatever that might have meant. For all the focus on growth and the ever-rising stock price, that meant growth of revenue more than new customers, scenarios, or business approaches. Soliciting feedback, voting on features, listening to customers, or creating products for all platforms are not substitutes for a point of view or a strategic moat, as our industry has shown time and again—as counterintuitive as that might be.

    A decade has passed since Windows 8, and product releases from the company have done little to avoid the IBM fate, a fear BillG firmly implanted in my psyche.

    Half will read that and mock me for suggesting any of it, saying that explains all that’s needed to understand the failure of Windows 8. The other half will read it wondering who IBM was and when exactly they mattered. I started my career when IBM cast a shadow over every aspect of the technology industry and was fortunate to experience Microsoft rising to that position. Bill instilled this very fear into all of us even before the internet or the split with IBM.

    Ibn Khaldun, an Arab philosopher of the Middle Ages, wrote that in war “the vanquished always seek to imitate their victors.” In business it is often the other way around. The victors end up becoming what they vanquished, as victors inherit the same problems on a new technology and product base. It might take years, but creative destruction is just as certain to take place. My Harvard Business School friend and previous co-author Marco Iansiti once joked to me that the school considered rebranding disruption theory as physics or math, not simply theory.

    Despite how the product landed and then crashed, Windows 8 was built by the most committed, skilled, and brilliant Windows team ever assembled, and that team had one mission: to give the PC a new direction for a new world. Ultimately, under my leadership and management we failed to build a product that would change the trajectory of Windows or PCs. We failed, however, with elegance, grace, and an amazing group of people doing their most memorable and rewarding work. Unlike many failed efforts at Microsoft, we had remarkable execution. Like many failed efforts at Microsoft, we had the right ideas but the wrong timing. The Windows market wasn’t ready, but when it was ready, Microsoft was too late. Windows 8, therefore, was both too early and too late.

    Windows 8 was the last product to be made by Microsoft’s definition of hardcore software. Some will say that is a good thing. It is neither good nor bad, but simply is. It is how eras evolve and the torch passes from one generation to another.

    I was attracted to Microsoft by the level of product and technology acumen of every single person I spoke with all the way up the chain to the founder and CEO, the office with a door, the free drinks, and the Pacific Northwest. What really got me was what I saw as the true nature of hardcore software.

    To be hardcore is to be wildly optimistic about what can be achieved tomorrow while harshly pessimistic about what works today. Creating software is an art. It is computer science and engineering. It is inspiration, and perspiration. It is inherently individual yet relies on a team. Most of all, building software is a group of people coming together to conjure something into existence and turning that into a product used by billions.

    And that is Hardcore Software.



  • Happy Holiday to those in the US. This is a special double issue covering the creation and launch of Microsoft Surface, an integral part of the reimagining of Windows from the chipset to the experience. To celebrate such a radical departure from Microsoft’s historic Windows and software-only strategy, this post is unlocked, so please enjoy, and feel free to share. I’ve also included a good many artifacts, including the plans that were put in place for what would happen after Windows 8 released. The post following this is the very last in Hardcore Software. More on what comes next after the Epilogue.

    As a thank you to email subscribers of all levels, this post is unlocked for all readers. Please share. Please subscribe for updates and news.

    Back to 106. The Missing Start Menu

    In 2010, operating in complete secrecy on the newest part of Microsoft’s campus, the Studios, was a team called WDS. WDS didn’t stand for anything, but that was the point. The security protocols for the Studio B building were stronger than for any other building on the entire engineering campus. Housed in this building was a team working on one of the only projects that, if leaked, would be a material event for Microsoft.

    WDS was creating the last part of the story to reimagine Windows from the chipset to the experience.

    When we began the project, it was the icing on the cake. After the Consumer Preview, it had become the one thing that might potentially change the trajectory of Windows 8.

    As Windows 7 finished and I began to consider where we stood with hardware partners, Intel, the health of the ecosystem, and competing with Apple, I reached the same conclusion the previous leader of Windows had—Windows required great hardware to meet customer needs and to compete, but there were structural constraints on the OEM business model that seemed to preclude great hardware from emerging.

    At the same time, the dependence on that channel meant there was no desire at Microsoft to compete with OEMs. In 2010, the Windows business represented 54% of Microsoft’s fiscal year operating income and Office was 49%—yes, you read that correctly. BillG used to talk about that amount of revenue in terms of the small percentage of it that could easily fund a competitor or alternative to Windows. The “Year of Linux” was not just a fantasy of techies but a desired alternative for the OEMs as well. So far the OEMs had not chosen to invest materially in Linux, but that could change, especially with an incentive created by Microsoft’s actions.

    Like my predecessors, I believed Microsoft needed to build a PC.

    Building PCs was something BillG was always happy to leave to other people. In an interview in 1992, Bill said, “There’s a reason I’m the second-biggest computer company in the world…. The reason is, I write software, and that’s where the profit is in this business right now.” On the other hand, the legendary computer scientist and arguably father of the tablet concept, Alan Kay, once said, “People who are really serious about software should make their own hardware.”

    Microsoft was founded on the core belief that hardware should best be left to others. In the 1970s hardware was capital intensive, required different engineering skills, had horrible margins, and carried with it all the risks and downsides that pure software businesses, like the one BillG and PaulA had pioneered, did not worry about. With a standardized operating system, the hardware business would quickly consolidate and commoditize around IBM-compatible PCs in what was first a high-margin business that soon became something of a race to the bottom in terms of margins.

    Microsoft’s fantastic success was built precisely on the idea of not building hardware.

    BillG was always more nuanced. He and PaulA believed strongly in building hardware that created opportunities for new software. Microsoft built a hardware device, the Z-80 SoftCard, to enable its software to run on the Apple ][. Early on, Microsoft created add-in cards to play sound. PaulA personally drove the creation of the PC mouse, the famous green-eyed monster. Modern Microsoft built Xbox, but also Zune and the Kin phone.

    Apple built great hardware and together with great software made some insanely great products.

    To build hardware in this context meant to build the device that customers interacted with and to build all the software and deliver it in one complete package or, in economist’s parlance, vertical integration.

    Mike Angiulo (MikeAng) and the ecosystem had the job of bringing diversity to the PC ecosystem, a diversity that Apple did not have. This diversity was both an enormous strength and the source of a structural weakness of the industry. PCs in any screen size or configuration one might need could always be found or even custom built covering any required performance and capacity. If you wanted something like a portable server or a ruggedized PC for a squad car or a PC to embed in a 2 Tesla MRI, Windows had something for you. Apple, with its carefully curated line, offered essentially big, bigger, and biggest, each with minimum, typical, or maximum storage across the Mac line; those were the choices. Even a typical PC maker like Dell would offer good, better, best across screen sizes and then vary the offering across home, small business, government, education, and enterprise. Within that 3 x 3 x 5 matrix, customization was possible at every step. This is the root of why Apple was able to have the best PC, but never able to command the bulk of the market.

    The idea of vertical integration sounds fantastic on paper but the loss of the breadth of computing Windows had to offer was also a loss for customers. It is very easy to say “build the perfect hardware” but the world also values choice. One question we struggled with was whether the “consumerization” of computing would lead to less choice or not. In general, early in adopting a technology there is less choice for customers. Increased choice comes with maturity in an effort to obtain more margin and differentiate from growing competition.

    One bet we were making was that Windows on ARM and a device from Microsoft was the start of a new generation of hardware. It would start, much like the IBM PC 5150, with a single flagship and then over time there would be many more models bringing the diversity that was the hallmark of the PC ecosystem.

    That is why we had never built anything as central and critical as a mainstream PC, and never had we really considered competing so directly with Microsoft’s primary stream of profits, risking alienating those partners and sending them to Linux. The Windows business was a profit engine for the company (and still is today) and that profit flowed through only a half dozen major customers. Losing even one was a massive problem.

    Microsoft had also lost a good deal of money on hardware, right up to the $1.15 billion write-off for Xbox issues in 2007. Going as far back as the early 1990s and the original keyboard, SideWinder joystick, cordless phone, home theater remote, wireless router, and even ActiMates Barney, our track record in hardware was not great. Microsoft’s hardware accessories were at best categorized as marketing expense or concept cars. It was no surprise my predecessors backed off.

    Like the mouse, the sound card, and perhaps Xbox, I was certain that if we were to succeed in a broad platform shift in Windows that we would need to take on the responsibility and risk of building mainstream and profitable PC devices. We tried to create the Tablet PC by creating our own prototypes and shopping them to OEMs as proofs of concept. We repeated this motion with the predecessor to small tablets called Origami, same as we did for Media Center. Each of these failed to develop into meaningful run rates as separate product lines even after the software was integrated into Windows.

    OEMs were not equipped to invest the capital and engineering required to compete with Apple. As an example, Apple had repurposed a massive army of thousands of aluminum milling machines to create the unibody case used in the MacBook Pro. Not only did no OEM want to spend the capital to do this, but there was also no motivation to do so. Beyond that, the idea of spending a huge amount of capital up front on the first machines using a new technology until a process or supply chain could be optimized was entirely unappealing even if the capital was dedicated.

    The OEMs were not aiming for highly differentiated hardware and their business needs were met with plastic cases that afforded flexibility in design and components. In practice, they often felt software was more differentiating than hardware, which was somewhat counterintuitive. The aggregate gross margin achievable in a PC software load was a multiple of the margin on the entire base hardware of a PC. The latest and coolest Android tablet was a fancy one made by Samsung, with a plastic case. The rise of Android, a commodity platform, all but guaranteed more plastic, lower quality screens, swappable parts, and the resulting lower prices.

    The OEMs were not in a battle to take share from Apple. They were more than happy to take share from each other. Apple laptop share was vastly smaller than that of the next biggest OEM making Windows laptops. Each OEM would tell us they could double the size of their entire company by taking share from the other OEMs. That’s just how they viewed the opportunity. The OEMs were smart businesspeople.

    Thinking we needed to build hardware, then building it, was one order of magnitude of challenge. Choosing to bring a product to mass market was another. Hardware is complicated, complex, expensive, and risky—risky on the face of it and seen as risky by Microsoft’s best customers.

    The Surface team, organized within the Entertainment and Devices division home to keyboards and mice, finished the first release of their namesake computer. It was a Windows Vista-powered table like the ones popular in bars, pizza places, and hotel lobbies in the 1980s when they ran Frogger. The new platform software provided the cool demos for Windows 7 touch. Surface came about from a research project rooted in long term efforts around optics and display technology. The effort was productized as the original Surface, with Panos Panay (PanosP) brought in from the peripherals group to help accomplish that. Unfortunately, the commercial viability of the table was limited. RobbieB, the executive of the division (which also included Xbox and phones), was looking to offload the effort or, better said, to make Windows pay for it.

    We would move the team over to take on our hardware challenge if we could figure out an arrangement that would not torpedo the Windows business. It was this step that put a halt to all previous projects.

    Managing what could become a first-party hardware effort from within the Windows team posed significant challenges, even obstacles. PC OEMs would rightfully become unglued if they believed the decidedly limited information they shared with MikeAng’s ecosystem team about their plans was shared with other OEMs. If their information were to be shared with a first-party team, that was even worse. Some of the earliest concerns expressed to regulators about Microsoft had to do with the walls, or lack thereof, between different parts of the Microsoft ecosystem and competitive parts of Microsoft, for example, between Windows and Office, where Office might have an unfair advantage. I had no desire to further a conspiracy theory.

    JulieLar offered her own set of challenges, but almost the opposite in nature. She was concerned about having a first-party hardware team that acted like an OEM more than a part of Microsoft. She wanted our hardware to fully embrace, not mangle, the Metro design language. I hardly wanted a team that would embrace the economics of OEMs, such as crapware and expensive accessories.

    Collectively we wanted a decidedly non-OEM to be our OEM.

    Both Julie and Mike spent a lot of time with Panos to help us all arrive at a structure and work process that was set up for success. Panos joined Microsoft in 2006, relatively recently, after working at NMB Technologies, an expansive 60-year-old Japanese maker of electronics components including keyboard switches and skateboard ball-bearings (which I used to pay top dollar for in high school). The complexities for Panos in jumping into the thicket of Windows were significant, not to mention the delicate nature of first-party PCs. The concerns for all parties were legitimate.

    The idea of a whole hardware team moving over, rather than building one organically, was worrisome, but we did not have the time to start from scratch. By most any measure, the idea that in about 30 months we would have a new ultramodern PC built on all new components the industry had never used before by a team that had only made a table computer was crazy. Panos’s commitment to being part of the massive effort was not just significant but deeply sincere. He saw the opportunity the same way we did—a chance to reimagine Microsoft.

    Working for Mike, Panos would expand the team in every dimension. Hiring and growing engineers for hardware, firmware, mechanical, plastic, manufacturing, acoustic, industrial, and safety, with designers from graphical to industrial to packaging, along with all the support functions I neglected to mention in that long list. We even had to embark on some special construction in the buildings to account for the needs of the kind of equipment required to build a PC.

    WDS was formed in June 2010 just as we were beginning to code Windows 8. We did not even send out an org announcement mail, keeping with the secrecy of the project. The name “Surface” moved over with the team and would naturally stick with the device when we were ready to bring it to the public. For now, codenames and codewords were in order. Not just one codename, but an entire system of codenames. We had codenames for every part, including unique codenames used for each vendor–part combination. There were codenames for every presentation. Over the course of the project, we maintained over 200 codenames. Why? The secrecy was to maintain our commitment to keeping information separate.

    It is worth noting, we were piloting ARM work on NVIDIA Tegra hardware and seeding groups across the company with ODM-style tablets housing Tegra components. No one inside Microsoft lacked hardware for testing, evaluating, or betting on Windows on ARM.

    The next 18 months were the most remarkable sprint in hardware development I could have ever imagined. Panos built an extremely tight and remarkably talented team to deliver products in relatively short time. Everyone stepped it up to a new level. One of the most critical aspects of assembling this team was his direct manager, MikeAng, who created a cocoon around the team, isolating them from all the forces internally. Mike also mentored Panos on the product development methods used to create Windows while apprising him of the best ways to integrate with the software team to avoid thinking like an OEM—the key problem we set out to solve.

    One of the things that Mike had to do which was super valuable for the whole effort was to be the “vault” for information about Surface and information about the PC ecosystem. The two orgs, Ecosystem and Surface, could have absolutely no information leakage between them. In many ways, the integrity of the Windows business model was in Mike’s hands. It was easy to trust Mike since I’d worked with him since he joined Microsoft out of college, and his post-Microsoft career included becoming a lawyer, so I think it is fair to say he was the perfect person for this role. As we’ll see it was just a bonus that he was exactly the right kind of engineer for this role as well.

    The core of the WDS team had previously worked together, but the scale of their previous projects was small by comparison—everything was bigger in Windows. The team had doubled from the early days of mice and keyboards to a broad range of peripherals and, most recently, Xbox consoles. Ten years in, Xbox was selling about ten million units a year (about 800,000 in 2021) and the business remained roughly break-even due to console margins. We had much grander aspirations.

    The story of the two devices, both tablet form factors, that became the first Surface PCs was one of incredible design and engineering efforts going through a remarkable series of iterations in an incredibly short time and then scaling to manufacture millions. It was also a blur. Creating a PC was new for me as well. It is not only fair, but important, to say that without the collaboration across Julie’s product team, JonDe’s engineering team, GrantG’s test team, and Julie herself, we would not have Surface. Jensen Harris (JensenH) and the entire UEX development team provided an attention to detail on everything from the BIOS boot screen, the on-screen keyboard, and the sound scheme, to the out-of-box experience that rivaled anything Apple did. It is one thing to build a reference PC within the walls of Microsoft—something done many times before. It is entirely another thing to deliver the innovation and scale of products that we did, and then bring it to a worldwide market.

    I cannot stress enough how much of a whole-organization effort Surface was.

    We did not set out to build what the industry was calling a consumer tablet or a tablet for consuming information and lightweight computing needs, as many, or just Apple, characterized the tablet market relative to the PC market. The world did not need more of that. Microsoft’s heritage and business rested with productivity, so the overarching goal was to create a PC that was great for productivity, creation, and mainstream information work. The PC would just happen to fit the current definition of a tablet, which was smaller than an 11.6” laptop and usable both as a standalone screen and in a traditional laptop posture.

    To justify building our own hardware we needed a unique point of view.

    We believed that if we could build a tablet that worked fantastically well for the web and Office, while being ultra-lightweight and ultra-reliable, we could redefine the device that constituted the bulk of PCs used in school and work. One of our most significant challenges was that the narrative for new devices was dominated by the tablet, meaning a keyboardless slate focused on consumption. This meant that even if we achieved our vision technically, we had an enormous communications challenge.

    The first Microsoft device, code-named GT or Georgetown for no real reason, was a premium ARM tablet able to deliver on the promise of PC-style mobile productivity. The whole point of building hardware as a first party was to do what the PC ecosystem would likely struggle or simply decline to do, and ARM was first among the challenges faced. The PC makers had all been struggling to enter the smartphone market, and ARM required a significant amount of product engineering and investment that was inconsistent with the low-priced mindset. Besides, they were fully occupied with Netbooks and later Ultrabooks and seeing little success with smartphones. As described in section 101, those OEMs working with ARM were genuinely enthused and supportive, but somewhat like Detroit’s reaction to electric cars, the internal tension proved too great to create and bring ARM devices to market, at least for the launch of Windows 8.

    Whether premium implied premium price along with a premium experience was an ongoing debate. Typically, in hardware designs, manufacturers work backward from a retail price goal, assuming a certain bill of materials (BOM), margins, manufacturing volume (which itself impacts the BOM), marketing and sales costs, returns, and so on. As product makers, this makes intellectual sense, but it frontloads so many constraints that rarely do great products emerge. I chose to start from the user scenario and see how far we could go and what we could get for what price, knowing that as we converged difficult decisions would arise.
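
    To make the working-backward approach concrete, here is a minimal sketch with entirely hypothetical numbers; none of these figures come from actual Surface planning. The point is only to show how many constraints get frontloaded before a single design choice is made.

```python
# Hypothetical back-of-envelope: working backward from a retail price target.
# All numbers are illustrative placeholders, not real Surface figures.

retail_price = 499.00          # target street price (USD)
channel_margin = 0.10          # retailer/distributor cut
returns_and_warranty = 0.03    # reserve for returns, warranty, scrap
marketing_and_sales = 0.08     # launch marketing, co-op, sales costs
target_gross_margin = 0.15     # what the device maker wants to keep

net_to_maker = retail_price * (1 - channel_margin)
reserves = retail_price * (returns_and_warranty + marketing_and_sales)
allowed_cost = net_to_maker - reserves - retail_price * target_gross_margin

print(f"Net to maker:           ${net_to_maker:,.2f}")
print(f"Allowed BOM + assembly: ${allowed_cost:,.2f}")
# Every line above is a constraint fixed before anyone asks what the
# customer actually needs the device to do.
```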

    This was a point of constant discussion with SteveB, who—and this was not news or even unique to this project—favored low prices and high volumes. Gaining market share was always preferable and what had gotten Microsoft to where it was. I felt this time we needed to take a different approach—reimagine what a Windows device could be even if that meant we start at the high end, perhaps like Tesla was doing at the time with Elon Musk’s famously not-so-secret secret plan to iterate to a lower price with greater scale, funding that iteration with earlier high-priced products.

    Early in the process, Mike and Panos set the tone for the role WDS would play in the overall strategy for Windows 8. It was extraordinarily helpful. The mantra for the joint efforts of hardware and software coming together was expressed as, “our job is to build a stage, a stage for the operating system and software.” This fit well with the whole design for Windows 8 software, which itself was viewed as a stage for apps and user creations. The idea that every layer of the system was trying to get out of the way of the layers above aligned entirely with the Metro design language, and was also exactly what no one else ever did, especially OEMs. This was in stark contrast to typical PCs and even typical OS iterations of the past, which felt the need to announce themselves whether via branding (those Designed for Windows or Intel Inside stickers, for example, that don’t even appear on Apple laptops because they drove Steve Jobs crazy) or via icons, popups, or notifications.

    We were not designing the hardware experience after the software experience (or before); rather, we were designing the hardware and software experience together—a reimagined experience.

    I tried to keep my own role in check. I’d seen many past hardware cycles within the company where there was so much going up and down the chain, so much approval, and so much tweaking by executives. Plus, it is crazy easy to have an opinion on a device and act like some enlightened big-shot executive. Panos’s experience with executives at Microsoft rightfully made him wary of me at first. Executives often had an outsized view of their contribution while underestimating the amount of work, backchannel, and eyerolling they caused. Advertising, pricing, user interface design, and feedback on hardware were places that executives easily meddled. And for that, teams suffered.

    I used a lot of different laptops. I bought nearly every iPad typing cover available (and there were dozens). I was a road warrior, so I was real-time testing. I used browsing, email, and Office a lot. I could have probably justified my own opinions, and many times I really wanted to, but had to resist the urge. I had asked Mike and Panos to pull off what by any measure would be a Herculean task. My responsibility was not to meddle.

    From my vantage point in developing the product, there were five inflection points in the design process—a high-altitude view since there were thousands of choices made in developing the product. At some level these were choices to be made, or constraints to accept. The project could have easily become impossible or careened off path with so many potential degrees of freedom. Anyone who looked at the typical roundup of PC laptop reviews for back to school or holiday was left browsing a dizzying array of specs, ports, dimensions, and component choices. It would have been easy to get lost in all these decisions to be made. Each one of those decisions is related to many others in an ever more complex set of linear equations to be optimally solved.

    The team picked these key points in the design process: chipset, the productivity posture, typing, materials, and peripherals.

    By establishing the initial platform of ARM and a sub-11” form factor, many of the traditional “spec” issues of designing the device would fall out. But this understates just how many constraints, and how fast, Panos and team needed to put in place—constraints that would maintain a point of view and deliver a product.

    The chipset was perhaps the most straight-forward choice for the team to make on technical grounds, though hardly without controversy or, for lack of a better word, politics. There was a good deal of pressure from the Windows Phone team and from some OEM partners who were still trying to build phones of their own (though on Android now) to go with Qualcomm. Qualcomm was viewed as a safe choice and the choice with the most intellectual property and lawyers to support that. On the other hand, the early hardware was far from ready for building a PC, especially the graphics support, which was so critical. It was the first taste of developing a product where the choices had implications beyond building the best experience. My own view from dealing with Qualcomm was that they were more concerned with the volume of devices we would commit to up front than with the quality or innovation we would deliver, even early in the process.

    The teams working across Windows porting to ARM favored NVIDIA given the choices we had. In particular, NVIDIA was strongest on graphics, which were key to the Metro-style experience. The underlying graphics chips mapped closely to our DirectX API and made developing device drivers supporting the animations and effects we needed with the performance we required as straight-forward as could be given all that was new. Aside from that, NVIDIA was a joyful partner and great at working with ODMs to supply test hardware.

    Ralf Groene (RalfG) led hardware design efforts for WDS. He joined Microsoft with a classic design education and experience at several marquee design houses in Silicon Valley. If we were casting a movie for an industrial designer, Ralf would be perfect. RalfG and the design team were cycling through prototypes of sizes and overall form factors at an incredible pace. Engineering for productivity and typing fell to the design team, with a very strong collaboration with the UEX team to make sure the software could gracefully adapt to the presence or absence of a keyboard and trackpad.

    The kickstand was such an amazing choice. Yet I lack a clear recollection of the genesis of it, only that the moment I saw it I was convinced it was an inspired choice. Having a kickstand solved the main problem tablets had, which is how to keep using one when you wanted to put it down or more importantly to type. The kickstand could easily have been a gimmick but in fact it became an iconic element and one that served to further the point of view centered on the dual modalities of consuming content hands-free and full-time work on the go. The kickstand would have tradeoffs in real-world usability, especially with the larger devices to come later, but for the first generation I felt it was exactly the tradeoff to make.

    The design team iterated with a kickstand, something that had been found on some early media players and phones. Should the kickstand be two small legs, one fatter leg, or something else? Should it support both landscape and portrait? The team settled on a full-width kickstand, almost a fold-out foot that provided a rock-solid level of stability. That stability came from a unique hinge design, created entirely by the mechanical engineers on the team. Opening and closing the hinge had the feel and robustness of a luxury car door, a German one of course. As the hinge closed the air dampening effect would give the motion a soft cushion with a nearly inaudible, but intentionally there, sound of luxury. I once spent time in the lab with the mechanical engineer on the team amazed at just the complexity and number of choices that were considered in the design and fabrication of the hinge. It was a great example of how every small detail is vastly more complex than readily apparent.

    The beauty of the hinge and kickstand was that they reduced the thickness of the whole productivity scenario by quite a bit—the plastic cases of the iPad, including Apple’s own cases, were true compromises. The 2015 Folio, which added two layers of material in a complicated origami-folding design, and the later 2020 Magic Keyboard, which was just heavy at 600 grams, or 130 grams more than the iPad itself, were both awkward. The latter keyboard added so much weight and thickness to the iPad that it was more practical to carry a laptop. Whereas Steve Jobs had to sit in a chair holding the iPad, with Surface one could hold it or flip out the kickstand and watch a movie while enjoying popcorn, or have a video conference with the integrated HD webcam angled precisely enough so it would capture your head and not your neck or below when placed on a table.

    Important for the productivity posture was the overall size. When working, a screen can never be too big, but when traveling a screen can never be too small. Finding the balance was super important. The math worked out super well for a screen with a 10.6-inch diagonal and a widescreen layout of 16:9, the typical HD or Blu-Ray numbers. At 1366x768 the screen was equally optimal for the new Windows 8 snap view to show two apps side-by-side—GT as a stage for Windows 8—and also to display movies full screen without the letterboxing that consumers hated.
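
    As a rough sketch of that math (my own back-of-envelope, not the team's actual spreadsheet), the diagonal and aspect ratio determine the physical screen dimensions, and the 1366-pixel width is what left room for the Windows 8 snapped view: a 320-pixel app and a 22-pixel divider alongside a 1024-pixel main app.

```python
import math

# Physical screen dimensions from a diagonal and an aspect ratio.
def screen_dimensions(diagonal_inches, aspect_w, aspect_h):
    scale = diagonal_inches / math.hypot(aspect_w, aspect_h)
    return aspect_w * scale, aspect_h * scale   # (width, height) in inches

w, h = screen_dimensions(10.6, 16, 9)
print(f'10.6" 16:9 screen: {w:.2f}" wide x {h:.2f}" tall')   # ~9.24 x 5.20

w, h = screen_dimensions(9.7, 4, 3)
print(f'9.7" 4:3 screen:   {w:.2f}" wide x {h:.2f}" tall')   # ~7.76 x 5.82

# Snap view arithmetic: a snapped app (320 px) plus the divider (22 px)
# leaves exactly the 1024 px minimum width for the main app.
print(1366 - 320 - 22)   # 1024
```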

    There was quite a bit of debate over the 16:10 versus 16:9 aspect ratio and the possible screen resolutions. UEX favored the extra width of 16:9; 16:10 offered more vertical space for productivity but could not support side-by-side apps. The availability of components and the cost of using the UEX preference made it a tough choice. This was also an area where we were going against the iPad trend, which picked a more traditional, even old, 4:3 aspect ratio used on NTSC TV and VGA-era monitors at 1024x768, and which was also criticized in reviews for letterboxing widescreen video. The iPad sat between widescreen video and the traditional trade book and Amazon Kindle aspect ratio of 3:2.

    From a supply chain and manufacturing perspective, the choice was more complex. The supply chain for screens was following the volume of all manufacturers combined. No single manufacturer could simply order a size not being used by others without committing to volume. In 2011 the dominant size for tablets was a 10.1” screen with 1280x800 resolution, a 16:10 aspect ratio. We saw the 1280x800 resolution as a waypoint, picked for price by the Android tablet ecosystem. It was neither good for productivity in landscape nor good for reading in portrait mode.

    We were deeply committed to having a screen that supported the side-by-side view of two apps in landscape. This came with a real tradeoff, however, in the usability as a pure tablet. In other words, even from the choice of screen we were optimizing for productivity. When used as a “reading tablet” the portrait mode orientation of 9:16 was particularly awkward. Reading with one hand holding the device was tiresome, like holding a college chemistry textbook in bed. Additionally, traditional laptops with 13” or 15” screens had moved in large numbers with Windows 7 to the 16:9 and 1366x768 resolution, indicating that resolution was an excellent design point for apps and would be around for years to come.

    The screen had two other technical issues to work through with significant impact on the experience. First, the core display technology for LCD screens was in the midst of a transition from MVA (Multi-Domain Vertical Alignment) to IPS (In-Plane Switching) with the latter being the newer technology. Going with the newer technology would be preferred as a basis for applying a touch sensor, but that further constrained the potential supplies and sizes.

    Second, the display panel would need an extra stage in manufacturing to attach a touch panel sensor. There were two approaches here as well. The standard approach, again for Android tablets, was known as air-bonding. In this technique the touch panel does not directly touch the underlying display panel leaving a small gap which eases manufacturing. Unfortunately, this also introduces parallax, an effect by which where you touch and where the sensor detects your touch do not always align in your brain. This is seen at check-out counters and ATMs which prefer the manufacturability and low-cost of this approach. While cheaper, it is a disaster for precision work on a PC. The newer approach, direct bonding, used by Apple and more premium tablets, directly bonded the touch sensor to the display. This was more expensive and had a non-trivial defect rate increasing costs as well.

    The way you know from the supply chain that you are asking for something difficult is that the price goes up and delivery goes further out. The suppliers, all singing from the same page, strongly encouraged a 16:10 screen with 1280x800. Their view was that this was the volume tablet resolution. After many trips and many meetings, we were able to secure a high-volume supply of the 10.6”, 1366x768, IPS, direct bonded screen. This was a tough choice and added early risk, but once you lock in a supplier as a volume partner risk finds a way of decreasing. I suspect over time even for the 10.6” screen we would have gone with a more comfortable aspect ratio, but by then we would have had more answers to windowing and productivity than we had with Windows 8. I’m getting ahead of myself.

    Productivity without a keyboard is not really productivity. We were not confused about this point as they were in Cupertino. Productivity without a mouse or trackpad, on the desktop, was not even possible. Because Windows RT had the desktop and desktop Office, a trackpad was required. The 10.6-inch diagonal screen would create a case with a 10.1-inch long edge, which by no accident was wide enough for a nearly full-size home row of keys for touch typists. This could be a huge advantage over the keyboard for the iPad, which was only 9.5 inches across, making a big difference in key size and spacing. Our screen size allowed for arrow keys and a row of function keys with the trackpad. These small differences bring a world of benefits when designing devices with which people have such intimate reactions. This is decidedly different than a desktop PC or even a large laptop. There’s something about a device that must be held that makes small decisions so much more important.

    The keyboard presented a unique opportunity to innovate to maximize portability. Panos himself, personally an expert in mechanical keyboards and part of Microsoft’s own efforts in innovative keyboards, was well-positioned to develop the two keyboards we would introduce with GT.

    Fundamental to the keyboard was the idea that it would be faster and more accurate than touch typing on a glass screen and include a trackpad for precision, or “professional,” pointing that so many reviews of the iPad noted was lacking. Since GT had a kickstand, the keyboard did not need to serve as a redundant cover for the back, but when folded over the screen it would offer protection. The underside of the keyboard was covered with a layer of what I always termed northwest fleece, which had a soft woolen feel when carrying GT around in your hand.

    Our folding keyboard could make an easy transition from productivity and typing to reading or watching while lounging around. The keyboard itself was a touch-based invention from Stevie Bathiche (StevieB), who was the resident scientist/engineer leading all things touch and display, and a genius. We would also add a very thin mechanical keyboard, one of the thinnest mechanical keyboards ever released to market. Our hearts would be with the touch keyboard at launch owing to its productivity and on-the-go scenario.

    StevieB’s ultrathin keyboard invention was essentially a touch panel with ever-so-slightly raised key outlines impressed on the panel. The electronics were created by laminating several layers of materials including a touch sensor and all the “wiring” together in a hot press machine—sort of a touch screen sandwich. The keyboard, called Touch Cover, was a mere 3.2mm thick and weighed only about 7 ounces, under 200 grams. Given the dimensions of the screen, Touch Cover was able to incorporate a trackpad on par with many small laptop trackpads and a complete set of laptop keys, including dedicated keys for Windows 8 charms along the top row, which also doubled as traditional PC function keys. In the US English layout, even the F and J keys had slight outward impressions for traditional typists to find the home row. There would be many skeptics about the keyboard when it came to typing proficiency. While there were many risks in the entire project, at the time Touch Cover seemed like the riskiest part. It would be so immediately visible and open to snap opinions. The press that typed for a living were used to their preferred “feel” when it came to laptop keyboards, often a sore spot for just about every Windows laptop review.

    GT had a beautiful hinge and an incredible touch keyboard, but how to attach the keyboard posed another challenge. Because of the hinge the cover could not be attached to the back of GT, but that would be dumb anyway—adding weight with no purpose was what we saw on the iPad. Once again, the mechanical engineers had a go at the problem. They put magnets to work. Not just any magnets, but a series of magnets of exactly the right strength to support the device, even swinging it from the keyboard, while also being easy to remove. The magnets did not just attach the keyboard to the tablet; they were also the connectors for signal and power. Attaching the keyboard had to perfectly align the connectors or it would not work.

    The magnetic keyboard was a more difficult path to perfection and reliability. Upon first seeing it, just as most others, I was skeptical. Would it fall off? Would it properly align even when connected in a sloppy manner? Would flipping it around to use as just a tablet prove “goofy” or would it really work? Would Windows correctly enable the on-screen keyboard at the right time and get out of the way entirely when a keyboard was attached? Would it work just as well for the thin mechanical model?

    The process of connecting the keyboard to the device proved to be more than just reliable; it became something of a signature. The first time we showed the Surface to the team that would lead the creation of the television commercials, they immediately connected with the “click” sound the engineering team worked so hard to get right. That click would become the centerpiece of the initial campaign along with the profile of GT and the keyboard. A subdued version of a magnetic connector, also with a click, would be used for the very small charging adapter. Ironically, Apple’s pioneering MagSafe connector that was so popular on MacBooks was not used on iPad, which took its lead from iPhone for charging. Even today Apple continues to struggle with magnets when it comes to covers and keyboard cases for the iPad. The new 2022 iPad typing portfolio with a kickstand is kind of a mess.

    The removable keyboard provided several advantages. To compete in the “pure” tablet market the keyboard could be priced optionally and also be reviewed as an accessory rather than a requirement. It also permitted personalization by choice of colors since the laminating process was able to substitute any color materials for the back or front. The back of the Touch Cover could be customized in both material and look, for example with corporate logos or art as we later offered. Finally, StevieB’s innovative touch panel was not restricted to keys and could be used to create any touch surface. Since it could be removed easily, we envisioned the potential to have specialized Touch Covers dedicated to specific applications. One example we showed early on was a synthesizer drum app mated to a custom keyboard. At a small, private event held one night in Los Angeles I even had a chance to watch actor/musician Zac Efron of High School Musical have fun with the prototype music-generating cover.

    The materials choice for the inner frame and device case was almost always where the OEMs made choices that were best for the bottom line—plastic provided low cost, light weight, rigidity, ample room for cooling, and agility across component and peripheral changes. This is where Microsoft’s ability to provide a significant investment could make a real difference in the final product. The engineering challenge with case materials is the triangle of cost, weight, and rigidity. Cost is not only the cost of the material, but the manufacturing cost of making the case, with all the curves, holes, and cutouts that make it a computer. Materials can be inexpensive and lightweight but too flexible to be durable—the screen is glass and needs structure to prevent it from flexing and breaking. Materials can be rigid and lightweight but cost a tremendous amount, such as ceramics used in fighter jets. Aluminum is lightweight and relatively rigid, but to bring the cost down Apple invested a huge amount of upfront capital in order to take blocks of aluminum and use mechanical milling to turn them into a PC.

    We needed the device to be as thin as possible and from the start we considered every fractional millimeter we could save. We knew that bringing over Windows in one product cycle, including the desktop, would bring real challenges to our ability to compete with the nine hours of video playback possible on the iPad. We needed every available millimeter for battery. An innovation in materials could prove a game-changer.

    In research, the materials scientists became intrigued by a relatively new process of injection molding a magnesium alloy. Such a material was expensive and the manufacturing process complex, but the resulting parts were lighter than machined aluminum, extremely rigid and strong, and created by more flexible molding. At this point, the injection molding process had only been used for small parts such as watch frames or jet engine components, but the materials partner, Mike, and Panos’s team thought it possible to use for the much larger cases. The material could be readily colored via a permanent and robust vapor deposition process, which afforded other opportunities.

    MikeAng, a mechanical engineer by training and certified alloy welder, dove into the process, visiting the factory in Asia. He returned with movies of cases being injection molded that looked like scenes from 20th century industrial America—sparks flying, molten flows of metal, giant machine presses.

    Betting GT on this new process was one of the more uncertain aspects of the hardware design, and also very expensive, relatively. The high upfront costs made us all nervous. With any upfront cost, called NRE or non-recurring engineering cost, the only way to justify it is to make a lot of the product, spreading the cost across many devices. That was certainly our intent.
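
    The amortization logic behind that is simple enough to sketch; the numbers below are made up purely for illustration and are not the actual figures from the project.

```python
# Illustrative only: how non-recurring engineering (NRE) cost amortizes per unit.
nre = 50_000_000          # hypothetical upfront tooling and engineering cost (USD)
bom_per_unit = 250        # hypothetical bill of materials per device (USD)

for units in (100_000, 1_000_000, 5_000_000):
    unit_cost = bom_per_unit + nre / units
    print(f"{units:>9,} units -> ${unit_cost:,.2f} per device")
# At low volume the NRE dominates; only at scale does unit cost approach the BOM.
```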

    The material would get the name VaporMg, pronounced vapor mag. Finished in a smooth, so smooth, black-gray, the material was another manifestation of the collective efforts of design, materials, and manufacturing coming together. While we originally planned to make the entire case out of VaporMg, supply constraints resulted in a more traditional aluminum frame with VaporMg used more sparingly. While that resulted in a bit of increased weight, the resulting scaled manufacturing and cost reduction were a good tradeoff.

    There’s no doubt that VaporMg was the most extravagant choice in the whole project and the one I felt the most over-extended in considering. In a big company with plenty of money, it is not uncommon to see branching out into new areas take on almost comically bad cost controls relative to industry norms. Without experience or baselines to compare against, and with all the excitement of “those other people doing this must be dumb,” it is easy to do. Across the product, Mike, Panos, and team had extremely good controls and constant attention to the BOM and NRE. Mike was a product of our frugality in Office and brought that to this project and team. The VaporMg choice was one I felt we should go with even though it was so uncharacteristic for me or much of Microsoft.

    One thing we considered was that we could reuse the materials process and plant capacity with other OEMs and license it to them for whatever devices they wanted to build. The idea of acting as a source for various components was a step we considered for trackpads and touchscreens as well, as we saw OEMs unwilling or unable to take on the NRE costs to create competitive laptops. Once again though, I probably failed to consider that they also viewed such costs as unnecessary when it came to their share battle competing with other OEMs more than competing with Apple.

    VaporMg was so strong that the team turned a standard GT chassis into a skateboard, on a lark. As a former skateboarder this thrilled me to no end. At one of the many press events hosted on campus offering a behind-the-scenes look at developing hardware, I skated around the lobby of Studio A. Not to worry, I was wearing a helmet though it was a borrowed bike helmet we located just for the photos. We snapped some photos that continue to live on. To further demonstrate the strength of the material, we showed off the drop-test machine that simulated dropping a tablet with different forces and angles and even directly on the kickstand—each time GT performed admirably. Protected by the Touch Cover, the combination felt relatively indestructible. The screen was securely protected without any extra casing on the back maintaining the thin profile and light weight.

    Finally, productivity for premium laptops and tablets circa 2012 still depended on wires or, more specifically, dongles. Apple’s MacBook Air approach of wireless everything was conceptually great but practically a pain, certainly in the early days. As a tool for productivity, GT needed to show up at a meeting and get handed the slides on a USB drive, pop the drive in, launch PowerPoint, and then connect to a projector, or handle devices like microphones and speakers. As a stage for Windows 8, we had optimized this type of flow from a performance and user interface perspective—get to the user’s work product and get to work as quickly as possible.

    Mike and Panos framed GT as a device that should connect in the way people needed it to connect. GT had a standard USB port, standard audio output, and a mini-HDMI port in addition to the dedicated magnetic-charging connector. The USB port was rather tricky as it set a minimum thickness, something that was not an issue on the iPad, which did not have a standard USB connector. The fat USB port was a bit ugly and some on the industrial design team referred to it as a “missing tooth” in an otherwise sleek profile. At one point we were so close to the limits of the connector standard we risked losing the trade group designation and permission to use the logo. This was in addition to strong support in hardware and software for all the current wireless protocols for Wi-Fi and Bluetooth, including support for wireless speakers and displays. This array of connectivity defined the soul of the device and shouted out a point of view: the device was not a peripheral but a full computer.

    Power adapters had become the bane of my existence as someone who used a variety of devices at home and work and travel. It was still too early to have a standard connector—Apple was still using the wide pin connector, which was painful and finicky despite the super nice magnetic connector. GT got an ultra-skinny magnetic connector. I was, however, much more interested in the power brick. I could not stand the typical PC that had a three-foot cord to the PC, then a brick, then another three-foot cord to the wall. It turned out people in Japan disliked those too and Yodobashi Camera came to the rescue with an aftermarket model, a single 2 meter cable with a brick at one end with folding prongs. I stocked up on those. I really wanted GT to have an adapter with simple folding prongs. I wanted this so much I didn’t even bother to ask about non-US plugs. As a constant traveler, I was also cursed with hotel rooms with too few outlets. Back then I almost always had to unplug a lamp which usually involved moving furniture around. I often traveled with an after-market phone charger that had an extra pass-through outlet to charge a USB device. I needed this because PC USB ports did not charge when the PC went into standby, which I usually discovered at 6 a.m. local time.

    Okay, so I meddled once after complaining one too many times.

    One day RalfG invited me over to the studio and brought me through dozens of models for power adapters. I knew as soon as I got to the studio that Panos was managing me, but in a way that was entirely okay given how close we’d become. As we walked from 3-D model to model, I opined and envisioned what it would be like in a hotel room. They explained to me all sorts of stuff about thermals, tolerances, UL standards, and even patents. One company had a patent on prongs that folded into power bricks! I picked the smallest adapter with folding prongs and a built-in USB charger. I like to think this was another iconic choice. It was not. It was merely useful. Ultimately, GT did not have a USB port on the charger but did have folding prongs. The second device had the USB port I wanted on a slightly higher-powered charger.

    From the project start we had been considering a second device. Early on, Mike had suggested to me that the team could build a desktop all-in-one using a screen with the next generation touch panel technology from the Surface table running an Intel processor. This device was the concept for a follow-on or successor to Surface table. It was a beautiful device. The prototype followed the same lines and hinge as would appear much later after many iterations as the Surface Studio all-in-one.

    We really wanted to showcase desktop productivity, especially the all-in-one form factor, which had no presence with PCs. Apple made all-in-one Macs as their desktop Mac and those, as with iPads and MacBook Air, made their way into every movie, high-end retailer, and boutique office setting. An all-in-one with the GT premium aesthetic could put Windows in those spots too.

    PC makers finally decided to wed the displays they were making with the laptop parts they made to create all-in-ones competitive with Apple iMac. Windows 7 saw a broad introduction of low-cost and reasonably performant all-in-ones, which was great to see.

    The arrival of PC all-in-ones and the reality that the next generation screen technology was not ready, especially in a large size and at scale, led us to abandon the all-in-one product. A good deal of work had gone into the basics of developing an Intel PC, which presented us with an opportunity.

    Standing around the now defunct all-in-one prototype, we discussed the idea of creating an Intel-based GT with the Surface BOM, or a derivative of it. At first, I thought this was entirely against the spirit of the project as it would simply compete with our OEM partners and cause more of a rift than not. At the same time, I’d been so happy with the progress we were making while equally aware of the limitations customers would see in an ARM-based Surface. The early prototype tablets, like the Samsung tablet from //build, had the promise of being great tablets for using and testing Windows 8 apps while also working as platforms to build those apps or use any existing x86 Windows development tools.

    Would the existence of a Microsoft-branded Intel device help address customer concerns, or would it create more confusion? Would OEMs see such a device as an inspiration, or would it further annoy them? What would Intel think?

    I had many concerns about forging ahead with an Intel-based PC coming from Microsoft. It did not seem prudent. Frankly, it felt more like an extra poke at OEMs when our main goals were embodied in the ARM-based device.

    What could we offer uniquely as a stage for Windows 8? The Ultrabook investments were ongoing, but Intel’s specifications called for them to be large-screen devices for Intel-specific channel strategies. Everything on Intel that was a tablet or Netbook-like computer, below the Ultrabook tier, was running low-end chips that had mediocre performance on Windows. What if we mated mainstream laptop chips, Intel Core, to the GT form factor? We would have an ultraportable Intel-based Windows 8 device in an ultra-convenient Netbook-sized package. There was nothing like it in market, an ultra-portable Ultrabook.

    The team took this as an opportunity to up-level all the specifications from GT. More of everything, including a full HD screen, faster USB, support for dual monitors, a bigger battery, more memory and storage, and by default it would use the Type Cover with mechanical keys. We were super confident in the Touch Cover, but we saw in early usage that it did not have the throughput for a full-time professional user on this professional-oriented device. We weren’t going to be religious about it and quickly adapted.

    The up-leveled GT could in fact contribute something that would help enormously to smooth over the risk of building hardware. We could offer a super high-quality pen experience, exactly what BillG wanted. Since the device would be more portable, resembling a typical paper notebook, lighter than scarcely available Tablet PCs, it might prove to be the ultimate pen-based notetaking device. The fact that we could focus an Intel SKU on the pen and handwriting and deliver a size that the OEMs were not considering provided a reasonable justification in favor of the device plan. The up-leveled GT filled an obvious gap in the whole ecosystem lineup.

    Why did we not put a pen on GT? The power draw and thickness of another layer added by the current state-of-the-art pen digitizer would impact the competitiveness with iPad. Prioritizing the ARM device to be competitive with hardware specs of the iPad was a key factor, and a good tradeoff for us to make. Given my own reticence about the pen it was also an easy tradeoff, though I had to repeatedly defend it with BillG.

    This was enough of a positive that with MikeAng we gamed out how the OEMs would respond to an x86 device. Our main point was that the OEMs were focused on Ultrabooks in order to gain the advantages of Intel pricing. Our device would fall outside Ultrabook specs and thus prove more innovative and less part of the crowded Ultrabook space.

    From my perspective, I viewed the device as an objection handler. It was a way to signal to the market that Windows 8, the x86 product, was well-suited to playing in this new world. Instead of a focus on ultra-mobile and all-day battery life, the focus was on power and all the needs of professionals in an ultra-mobile package. This reflected the early reviews of Windows 8 and the challenges we were having with the tablet narrative as put forth by Apple and reviewers.

    GTX, or Montlake, was the code name for this second device. It looked a bit like a puffed-up GT. Unlike GT, GTX would have all standard Windows PC components from the chipset to the BIOS to storage and RAM. It would also have a fan, and to mitigate that the team developed innovative cooling slots all around the device to keep internals from overheating or creating a hot-to-the-touch exterior. By virtue of being Intel based, GTX had noise and heat that GT did not, and had about half the battery life, more typical of the real-world numbers being seen for the first generation Ultrabooks, about 4-5 hours, compared to the 9+ hours we would see with GT.

    GTX was about three months behind GT. I was clear that GT was the priority should trade-offs in any dimension be required. By the time we announced GT we would announce that GTX would be available three months later. For many that gap felt amateurish compared to the most recent Apple launches which had announced new devices on a Tuesday for sale on a Friday. As we know now, that was not the long-term norm for Apple nor was it how the iPad, Apple Watch, or other devices launched since. Nevertheless, Apple was doing everything right at the time so as agile as I thought we were, it was still less than was expected of us.

    By the early spring of 2012, the manufacturing team started making small runs of test devices. It took months of work to scale up an assembly line, and it was expensive to make devices in any quantity for testing. Plus, everything was top secret. We had made about 1,000 test ARM devices partnering with NVIDIA, which were being used mostly in labs. They were also shared with teams across Microsoft contributing to WOA under the strictest security protocols. We never had enough of these test machines for everyone who might have wanted one to just check out, but we had enough to build the software across Windows, Developer, and Office. These devices cost over $1,000 each and were great to have but decidedly test machines. I was still using our NVIDIA test device, but, as with anyone who knew about GT, I was anxious.

    Time was passing both slowly and quickly. We were closing in on finishing the Windows 8 software schedule, August 2012, which meant time was slow as each day fewer and fewer code changes were made. GT, however, was getting close to the point where it was time to commit to manufacturing. Spinning up a line means you want to ramp it to full capacity and not stop, otherwise we’d just be burning cash.

    Surprisingly, there were absolutely no leaks. Even across Microsoft no one suspected anything, certainly not first-party hardware. With Windows 8 RTM approaching, the overall messaging for our reimagining point of view would start to diffuse through Microsoft. That meant talking about hardware. At a regularly scheduled Board Meeting I was asked to provide an update on Windows 8. As the code was essentially done and we had broad disclosure already, there was little to do but assure the Board that we were on track and to discuss the worries we all collectively shared. The Board, however, had not yet seen the final GT hardware. In many previous meetings we discussed ARM and the strategy, including building first-party hardware, and I had previously shown demonstrations using the NVIDIA testing prototype. My sense was that the gravity of the decision to make first-party hardware had not really sunk in—the general idea of “competing” with our OEM partners was still viewed as a challenge, especially because the Windows Phone team had tried to create a phone but ended up partnering with Nokia, before later acquiring the company.

    Whenever I spoke with the Board, I was always excited to show the progress of our product—so long as we were ready to, and it would not be over-promising or confusing. I would almost always drive a demo myself casually and without much fanfare—not only was this more authentic, but it also permitted a more casual interaction. With GT I did just that. At one point I just pulled it out of my bag and began to show it off just as we had been doing in incredibly limited ways. The pitch was easy because it was what we had aimed to build. Promise and deliver.

    The Board and Steve, however, were much more focused on the big decision to sell first-party hardware and the bottom-line impact to the overall Microsoft P&L and earnings. I think the Board was still smarting from the Xbox write-off five years earlier. This ended up being the discussion, which seemed awkward to me as we had articulated this as the strategy from the start of the project. At the meeting, I was starting to get the feeling that I had in 2001 when we suddenly backtracked on the fully-baked Office.NET product plan. Only this time I felt I had laid all the groundwork and had discussed this at several other meetings. In hindsight, I suppose it was the reality of seeing the actual device and that we were delivering that made it all seem that much more real.

    There was a tense discussion about the costs, especially the hundreds of millions of dollars to build out the final assembly capacity, and about the balance of building enough units to keep the cost down by spreading the non-recurring costs over as many units as possible, but not so many that we’d flood the market. One thing I had done specifically was not play any games with the budget or expense approval. Many projects at Microsoft spent a lot of money by spending a little bit at a time so as not to exceed an executive’s approval limit. My whole career, with the help of CollJ, I kept my spending authority artificially low. The spirit of my mentor JeffH was always looking over my shoulder reminding me it was Microsoft’s money and not mine to spend. This meant SteveB had to approve all the spending and we would have to have the discussion up front. At the end of the meeting when the Board asked the final approver question, there was no doubt in my mind. Strategically, and from the perspective of how well we had executed, there was no doubt either. I was accountable for the result and was certain we should move forward and said as much. SteveB and I would have several more discussions, mostly centered on the price point and volume, before he hit approve on the invoicing tool.

    Before the first public showing, I wanted to show the device to PaulA. Paul was an early advocate of Microsoft building hardware and had several forays into leading-edge hardware through his Vulcan innovation company. I brought a device over to his office near the stadium used by the Seattle Seahawks. As someone who said he believed the old saying about those serious about software building hardware, Paul was excited to see what the team had accomplished. We spent most of the time in a deep discussion about hardware manufacturing, the bill of materials, and how Microsoft would scale the business. Paul’s Vulcan had built an ultra-portable PC powered by a novel Transmeta chip, which aimed to be Intel-compatible and consume less power. He was profoundly connected to the challenges of making hardware and at the same time agreed on the need to do so.

    We set a date for a public unveiling in mid-June 2012. Just two weeks before, at the All Things Digital Conference, we wanted to make sure one reporting team, and only one, got a bit of a preview. I was using my Surface RT all the time, but not in any public places, and essentially handcuffing my messenger bag to me. D10, the 2012 conference, featured Apple’s Tim Cook, Mayor Michael Bloomberg, and a host of top-shelf guests, including SteveB, but no Windows demo. Phew.

    We prepared to give a brief and private demo at the conference. We did not schedule any time but instead set up a bit of an ambush outside on the terrace of the show’s green room. I messaged show co-host Walt Mossberg and asked if we could find 15 minutes between his on-stage interviews to show him and Katherine Boehret something very cool and secret. He was intrigued, and of course they both agreed.

    JulieLar and I would meet them and provide a surprise sneak peek of Surface. This was the first time anyone outside the Microsoft “tent” would see it. I grabbed my top-secret messenger bag from the hotel safe and met Julie and our audience of two backstage.

    I offered a brief lead-up, reminding them of the Windows 8 vision and goal of reimagining Windows from the chipset to the experience. I then took out Surface RT from my bag and said, “So, we built a computer.”

    We showed them all the bells and whistles. And they were, at least by my account, speechless for more than a brief moment. We just smiled while they took turns handling the fully functional sample.

    Then they had a lot of questions. How much would it cost? What would the OEMs think? What was the battery life? When would it come to market?

    Wait…would it even come to market? Katherine asked if it was a prototype or one of those PCs we built to encourage better PC design. We assured them that this was going to market as a first-party device.

    Walt asked about one of his most significant issues with Windows PCs, the prevalence of crapware—that software added by PC makers to “enhance the value” by offering trials, promoting PC makers’ extra hardware (such as buttons to launch specific software), and other add-ons that you probably don’t want but that significantly impact performance. On this topic I was extremely confident in answering. The whole point of creating Surface was to show the marketplace what the very best Windows experience could be. Little did I know just how difficult it would be to maintain this seemingly obvious point of differentiation.

    While I had many fun moments described here in Hardcore Software, this surprise for a team I respected so much was one of the highlights. I wish I had a photo of the moment.

    Invitations were later sent out for a mysterious event in Los Angeles with no hints of what to expect and little notice. The venue was a Hollywood production facility, Milk Studios. More than 200 press, financial, and industry guests made their way to the sound stage the morning of June 18, 2012. Some were still complaining about the short notice, the time, the location, and the lack of detail.

    On stage, SteveB introduced Microsoft’s move to hardware, emphasizing that we would not let barriers between hardware and software prevent us from innovating and serving customers. At the end of his section, he unveiled Surface, not the keyboard or the kickstand or any specs. It was the CEO photo op lending all his gravitas to the event.

    The stage was small but had been designed with a runway, making possible a degree of intimacy that would become a hallmark of Surface events. The risers were beveled at 22 degrees, the same angle as the kickstand.

    As I looked on, I stepped through the demo in my head for the 1,000th time. I had my Surface RT in hand with the screen polished as bright as a signal mirror, using a trick the team had of buffing with hand sanitizer and a microfiber cloth. As Steve was finishing, I touched the power button like I was supposed to and noticed the slightest little screen twitch. I brushed it off.

    Rookie f*****g move.

    I walked out on stage, as excited as I could ever possibly have imagined. About 10,000 people had worked on what was about to be shown for the very first time. Microsoft had never done such a surprise announcement before. We had never spoken to a room of people without pre-briefing them. Everyone was on edge.

    Proudly, after making my way down the narrow runway, I announced Surface, a stage for Windows 8.

    Two minutes and thirty-one seconds into the presentation, I was showing the new Internet Explorer. As I touched the screen for the audience, nothing was moving.

    I muttered “Oops” and skipped backwards up the runway to the demo table and grabbed the backup. What felt like ten minutes was nine seconds, the longest nine seconds of any demo I had ever done. Who crashes the hardware during a reveal we practiced two dozen times flawlessly? Me. The guy on stage who should have swapped out the machine when the screen twitched. I could physically feel Mike, Panos, and the demo team backstage cringing.

    Then the teleprompter went out too.

    I kept going. In a bit of stagecraft, I introduced the magnetic connector and a nice cover with smooth fabric on the outside, jokingly referred to as “Fine Northwest Polartec®” (it was just nylon), which was an innovation as well. I attached the cover with a memorable click—that great work of the mechanical team.

    I posited to the audience: Why shouldn’t we do something more than cover the screen? To oohs and aahs, some gasps, and then applause, I extended the cover to reveal the full multitouch keyboard. At least that worked, I thought.

    The main reveals were complete. Phew. I made it through the crash. Backstage the team was despondent but also worried about how I would react. I was so disappointed in my failure to switch devices. That it happened at all still nags at me a decade later.

    MikeAng then came on stage and flawlessly demonstrated the Intel-powered GTX, newly named Surface Pro, including writing on screen with a pen. I know that made a certain person back in Redmond very happy. Mike then introduced Panos. Panos detailed the team culture and design process: RalfG, StevieB, Pete Kyriacou (PeteK), Brett Ostrum (BrettO), and so many more were featured in a video Panos showed before a deep dive into all the technologies in the product.

    We concluded and then opened up an area offstage where the press could experience the device. Despite my troubles on stage, letting them see the device in action proved the right call. It wasn’t a free for all, but it was something.

    As the press and guests filed to the open area, one of the best in the business stopped me. By all accounts hyperventilating, Joanna Stern, then of ABC News, had just one question: “How did you keep it a secret?” I like to think we got more done than that, but it was a remarkable accomplishment for Microsoft. Some inside the company would be critical of the secrecy, specifically critical of me, but especially for Surface the nature of the business required it. The test hardware, running essentially the same componentry, was available to all participating teams.

    With the product in hand, I started to use it more regularly. Just after the show I made it a point to use it at my regular places around headquarters. I had a photo snapped while eating lunch at Kidd Valley, my favorite protein and low-glycemic-index spot right by campus.

    One of the most useful traits of Surface running Windows RT was that even if something failed it took only a few seconds to restart the device and get back to where I left off, which was especially valuable while the software was still under development. That made it more worry-free than a Windows laptop, as we expected and designed for. It was also fast to resume from standby, and the Metro Mail app connected to Exchange was constantly fetching email even when in standby.

    When AnandTech’s Anand Lal Shimpi reviewed Surface at availability, in an extraordinarily detailed review of almost 9,000 words, the conclusion was just what we hoped to achieve.

    I don't believe Surface is perfect, but it's a platform I can believe in. What I'm most excited about is to see what happens after a second or third rev of the design. [. . .]

    If you're ok being an early adopter, and ok dealing with the fact that mobile devices are still being significantly revved every year, Surface is worth your consideration. If you've wanted a tablet that could begin to bridge the content consumption and productivity divide, Surface is it.

    My first day of work I could not even get a PC the size and sound of a small refrigerator to connect to a network and run reliably. How far we’d come. I held in my hand a PC that could survive a 10-foot fall, rebooted in seconds, ran the best software Microsoft ever created, weighed under 2 pounds travel weight, and easily connected everywhere. I could not be prouder of the work the team had done.

    Over the summer as the product wound down, we began planning for what would come next. This almost sounds crazy with all that was going on. We had created a machinery for delivering products. It was incredible to be part of. To think that just five years earlier the team, mostly the same team, had struggled for six years to create one release, and now we had created two releases on time, with relatively little of the stress and strain, and with very high quality. We needed to keep moving.

    Windows 8 was a 1.0 release. Perhaps by many accounts it was a 0.9 release, at least for the reimagined part. The Win32 and Intel part of the release was incredibly solid. It was indeed Windows 7 plus more. The team knew we had set out on a multi-year, even decade journey. I knew we would need to operate completely differently.

    There would not be a Windows 9 next. In a very short time over the summer, we organized and began planning “Windows Blue,” a name I picked to signify it was not another big release. We had a lot of finishing to do. Microsoft never got anything right in the first version, or even the second. We needed to move the team to think about acting with precision in a timely manner so we could, in Microspeak, crank out a release every year.

    We created a vision and a schedule for a release to finish in one year. The entire goal of the release was to refine everything from RTM. Refine the platform. Refine the user experience. Refine the core built-in applications and service connections such as SkyDrive and Outlook.com. Decide on a long-term Surface roadmap. The list was very long. Above all we’d fix the things that were so obviously not complete, such as requiring a trip to the desktop to use files or change a PC setting. And yes, we would pay close attention to the reviews as they came out. Though I can say even today, we would not have planned on a full retreat, a redo, or a reset.

    Microsoft takes three versions to get things right. We grew up with that reality. The world was moving ahead with or without us. We were determined.

    On August 2, 2012, Windows 8 went to manufacturing. We finished 120 days later than our original schedule. The Surface assembly line went into operation. We had a fantastic event on campus. Once again the administrative assistant team outdid themselves.

    August was always a busy month at Microsoft. The field sales organization held their annual rally/strategic coordination meeting with almost all of the global staff. Microsoft usually held a company meeting around that time. There was also a Wall Street event or shareholder meeting. These events are the rhythm of a large corporation.

    August/September 2012 was also when the company came together to get behind Windows 8. For all the concerns and issues raised as the product progressed, there was an electricity and a sense of excitement. People had seen the doldrums for the past decade. People had seen PC numbers declining. People had seen Microsoft lose in mobile, fall behind Linux, and not yet have a role in cloud computing. While the financials the company was putting up were fantastic by any measure, the buzz was gone. More than a buzz, the company wasn’t a leader. The company wasn’t, for lack of a better word, relevant. We all believed, and genuinely hoped, Windows 8 would offer a chance to be the industry leader again, not just the biggest company, but the most relevant company.

    At both the global sales meeting and the company meeting we presented Windows 8 and Surface. The global sales meeting told the story of Windows 8 and why it could win. The field, of course, was worried about the missing Start menu. I shared some usage data comparing the Start screen to the Start menu. Rather than trying to jam all the functionality into a tiny corner of the screen, we showed how people saw the information from Live tiles and made full use of the screen, all while maintaining familiar positional memory and muscle memory. It was just one point, but the kind of message you send to the field that says “we heard” and here is what we consider to be the case. It was not spin. It was what we believed.

    What I remember, though, was not apologizing for the Start screen but the absolutely incredible, over-the-top reception just as I walked out on stage. I have never felt so much gratitude from 30,000 people all at once. I got to stand there, head bowed, absorbing that gratitude for the team back in Redmond. I stood on stage to applause for nearly a full minute. Humbling doesn’t begin to describe the feeling. I couldn’t say it enough at the time, but I can once again say thank you to the global field team for that welcome and appreciation on behalf of the team. It was so genuine, so unexpected, and just so wonderful.

    Just a week later we gathered at the Key Arena in Seattle for the company meeting. While much of the message was the same as at MGX, this was headquarters, which meant it was mostly the product development groups and company staff. They wanted to see the long version of everything. We did everything to deliver. We had the biggest hardware lineup of Windows 8 PCs MikeAng had ever assembled. PanosP came out and “debuted” Surface once again. Tami Reller walked everyone through all we had done to bring the products to market. We had an amazing and relaxed time. Once again, the reception was just over the top. It was amazing.

    We had one more surprise even after all of this. One of the many great joys of being part of creating Surface was the chance to work with a creative team in Hollywood who would bring Surface to television and web advertising. The team was headed by movie producer Andrew Panay, brother of Panos. He assembled a team of creative people to write, visualize, and direct the television spots, the sound stage for the launch event, a worldwide street art campaign, and a plethora of other materials. Andrew was already a seasoned film producer (Wedding Crashers), and their early ideas and pitch made it easy to decide to collaborate with the team on a whole new approach. Andrew, with his keen eye for talent, brought in the director Jon M. Chu to create the marquee television debut for Surface, called Movement. The commercial featured the iconic “Click-In” sound, the hinge action, colorful touch covers, and a troupe of talented au courant dancers. Astute viewers and frequent travelers would later recognize some of the dancers from another well-known spot from Chu, the Virgin Atlantic safety video. Chu would later direct G.I. Joe: Retaliation, Crazy Rich Asians, and In the Heights among his dozens of credits.

    We debuted the spot not only for all of Microsoft HQ, but for SteveB as well. Steve did not like surprises, but I had a strong feeling he would love this commercial. It was high energy like Steve, and it sold hard on the attributes of Surface. It was cool. It was hip. It was, well, everything Microsoft was not.

    Steve, MikeAng, Tami Reller, Panos, JulieLar, and some others stood by the green room, looking out at the full venue of 20,000 or so employees as the lights went down and the commercial debuted. The arena and SteveB went bonkers. Score another awkward high-five moment for me and everyone within a 20-foot radius. The commercial was a moment. The campaign went on to win awards in the advertising world as well, something Microsoft had not accomplished in ages.

    Both events really demonstrated that Microsoft’s headquarters personnel were excited, optimistic, and supportive, and that a great many were direct contributors. Certainly, we had not experienced anything like that at Microsoft in the past decade.

    For the October launch, the Microsoft Store team secured New York City’s Times Square for a Surface and Windows 8 block party celebrating the opening of the new flagship Microsoft Store. The launch event itself was in an enormous hall downtown. It was filled to capacity. For me personally it would be mostly anticlimactic, as launch events usually were.

    Walking around Times Square that night among thousands of people at our own party was crazy. But my mind was not there. I was thinking about what was going to appear with reviews and more. I was thinking about how we would sell all the Surface devices we built, given the plan to sell Surface RT exclusively through the handful of Microsoft Stores. I did the math on that, and it was scary. I worried about the aggressive pricing of $499, or $599 with Touch Cover. I was thinking about the Start screen. The desktop. The App Store. The Windows team. Windows Blue. All those people who had done so much, given so much, done such a fantastic job.

    I loved this company. I was also exhausted.

    About a year before that big launch, I wrote an email that I would deliver on my last day of work at Microsoft. Unsent then, I knew it would go out in the not-far-off future. I stored it in my email drafts folder and held onto a printed copy, undated and unsigned, in an envelope in my Windows logo Timbuk2 bag along with my Surface tablet and my 16 pages of printouts of the vision, staffing, budgets, product reviews, compete info, and the dev schedule that I always had with me. I told nobody about that letter.

    I wrote it after a big debate at a cross-division executive meeting over the degree to which Windows 8 would be designed to sell other parts of Microsoft—how many of those colorful Metro tiles would be pushed to customers, not because each was a good product choice but because we wanted to sell them something else. I was being excessively principled, as I was known to be, believing in the primacy of the customer experience. I believed others were being too flippant about our role as a platform provider, which was a significant reason why we could make a better PC for customers. “It is not the user’s PC, it is our PC,” one exec opined. “Sure, it is crapware, but at least it is our crapware” and “We fought the DOJ so we could own the screen,” others chimed in.

    Product cycles are emotional roller coasters. Anyone who has ever gone through one knows there are dozens of times when it just gets to be too much. The more seasoned one becomes the more one knows never to do anything precipitously. I knew it was time for a change. I intended to get through the launch. The plans were in place.

    I was spent.

    Equally clear to me was that the company was spent too. Microsoft needed to do things differently, without me.

    On to 108. Epilogue: The End of the PC Revolution




  • This section was the most difficult to write. At least people look back favorably on Clippy. The Windows 8 Start screen lacks any kitsch or sentimental value. It was the wrong design for the product at the wrong time and ultimately my responsibility. This is not the story of the design. There are better people to write about the specifics. This is not a story of ignoring feedback or failing to heed the market, but a story of just what happens when you’re out of degrees of freedom. This is the story of the constraints and the rationale for how we managed a situation that we saw as a quagmire. The good thing about the Start screen was everyone had an opinion. The bad thing was most of those opinions were not favorable.

    Subscribers, only two more scheduled posts remaining.



  • The previous section detailed the release of the Windows 8 platform, WinRT, for building Metro-style apps. In the reimagining of Windows from the chipset to the experience, we’ve covered all the major efforts. In this section, we will describe the latest in PCs that will contribute to Windows 8, which Intel called Ultrabook™ PCs. We will also introduce the Windows Store where developers could distribute apps. The really big news will be the Consumer Preview, or beta test, for Windows 8, where millions will experience the product for the first time. It might surprise readers, just as with the Developer Preview, that the reaction to the product across many audiences was quite positive. Just how positive? And what in the world could the professional press and reviewers actually have liked? And what did Apple’s Tim Cook have to say about all this?

    Back to 104. //build It and They Will Come (Hopefully)

    Following the //build conference we were feeling quite good. Not to belabor the point, but I recognize how challenging it is to take such feelings at face value given where the product ended up. In writing this and helping people experience the steps we went through at the time, in sequence, my hope is that what comes to light is that we were not bonkers and that in fact much of the industry was excited by Windows 8 as it emerged. Of course, there were skeptics and doubters, even haters, but as veterans of dozens of major products we’d seen this before and the volume for Windows 8 was not disproportionate. If anything, the excitement and optimism were higher. So where did things take a decidedly different direction? It was after the product emerged from the Developer Preview and went through a series of events, including the widely distributed Consumer Preview, or beta, when millions of people would experience the product. The lead-up to the Consumer Preview in March 2012 included some important steps in the process as well.

    New Ultrabooks

    On the heels of the //build conference in September 2011, Intel kicked off an effort to reenergize the PC industry with a response to Apple, finally. Intel developed a series of specifications, financial incentives in the form of marketing and pricing actions, as well as supply chain activation to deliver a new class of laptop. Intel called these Ultrabooks. We called them a blessing.

    Intel was best positioned to drive this type of advance. It was always difficult for Microsoft, simply providing the operating system, to dramatically alter the hardware platform, even though many thought that by virtue of building Windows we held significant sway. We certainly had influence, but ultimately Windows was a wide-open platform, which meant hardware to support any scenario was under PC maker control. The few times we had tried to tightly control hardware specifications, such as with Tablet PC and Media Center PC, did not go well at all. Worse, such controls angered not just PC makers but our fans as well, who always wanted to build PCs on their own and experiment with hardware components.

    Unlike Microsoft, Intel had a unique ability to influence hardware specifications, and its influence increased over time even as Microsoft’s waned over the years. Intel rallied the industry around Netbooks. While that was a failure, it provided a playbook that Intel could later follow. Before the Netbook, Intel almost single-handedly drove a consistent level of support for Wi-Fi with the Centrino line of chips, which brought both lower power consumption and Wi-Fi to the standard business laptop. In these cases, and many others such as USB, SATA storage, integrated graphics, and more, Intel took on a broader role in determining components and building software drivers for Windows (and Linux) while making it easier for OEMs to adopt a complete platform.

    The efforts were not pure altruism. Intel would use these complete component platforms to steer OEMs to specific price points for chips as well as unit volume commitments. With those in hand, Intel could broadly advertise the platform using its massive Intel Inside advertising budget. These financial incentives were eagerly embraced by OEMs and were a key part of their margins. Intel maximized its own margins as well by carefully choosing the CPUs in these platforms and enabling OEMs to upsell to even higher-margin chips as appropriate.

    This dynamic is why competing with the new Apple MacBook Air, starting in 2008 and followed by subsequent models and then Intel-based MacBook Pros, proved elusive. Conspiracy theorists would believe that Intel was slow-rolling competitive PCs just to keep Apple and Steve Jobs happy. I never saw any indication of such a dynamic. Rather, it just seemed like PC makers were basically fat and happy in their share battle with each other. They had little worry about the 3-5% of share Apple had, especially because they viewed Apple laptops and their customers as high-end, expensive, and premium. The PC business was all about good price and great volume. Being a pound heavier, an inch thicker, and plastic made little difference. As Apple share among influential customers, especially in the US, increased, the urgency from Intel and PC makers changed.

    At the 2011 Intel Developer Forum (IDF) in Taiwan and in parallel with the //build Conference, Intel unveiled a new concept PC, the Ultrabook™. An Ultrabook wasn’t an actual PC from Intel, but a series of specifications or requirements for a PC to inherit the Ultrabook label, and thus the CPU pricing and broad co-advertising that came with it.

    Unlike Netbooks and Centrino, Ultrabook specifications were rather detailed and covered a broad set of criteria beyond even the components Intel provided. The tagline Intel chose was “Thin, Responsive, and Secure,” which would be used quite broadly. Among the requirements to be part of this program, new PCs had to include:

    * Battery. A good deal of the platform effort was a new type of battery that was not yet used broadly on Windows PCs. Ultrabook PCs required non-removable Li-Poly batteries of 36-41 WHr, designed to fit around components, and a minimum of 5 hours of runtime.

    * Storage and Responsiveness. While not strictly required, there was a strong recommendation to use solid-state drives, SSDs, in Ultrabook PCs, which would significantly improve performance. SSDs also made it possible to strongly recommend a wake-from-standby time of just 7 seconds, which for Windows PCs would be excellent at the time.

    * Chassis Design. For the first time, Intel specified what amounted to innovative chassis design. For laptops with 14” or larger screens, the chassis needed to be under 21mm thick, and for smaller screens under 18mm.

    * Screen. The Ultrabook specification included guidelines and requirements covering display selection as well, including detailed values for thickness, bezel size, viewing angles, pixel density, and power requirements. At IDF, Intel showed off displays from a number of display makers who were ready and able to supply screens.

    * Keyboards. Even keyboards, far from Intel’s expertise, received attention. Backlights, spill-resistant layers, key travel, and key shape were all specified in Ultrabook design. This was a significant departure for Intel, and the requirements created the need for keyboard redesign for all laptops.

    * Sensors and devices. Intel even included recommendations for devices usually seen far off the motherboard, including a 720p webcam, accelerometer, GPS, ambient light sensor, and more.

    Intel really geared up the supply chain. This was crucially important during the huge ramp-up happening with mobile phones, where many suppliers were thinking of moving on from PCs. As it would turn out, Ultrabooks were the last gasp for innovative PCs.

    Ultrabook laptops would turn out to be the ultimate devices for the road warrior running Office. They even led to the standardization of HDMI connectors in conference rooms after decades of VGA/RGB connectors. Windows 7 had introduced the Windows+P command to make it easy to switch displays, thus ending the need for degaussing and rebooting PCs to project…mostly. The stellar work at the device and OS kernel level to reduce power consumption, improve boot time, and even the unique features for SSD storage all contributed greatly to a fantastic experience for this new form factor.

    Ultrabooks brought Windows hardware into the 21st century and were far more competitive with Apple laptops than we might have expected after waiting so long. In fact, Ultrabook PCs were downright cheap compared to Apple products. While most would retail for the magic number of $1,495, many could be had for the other magic number of $999. This compared to nearly twice as much for similarly configured Apple laptops. All in all, this was a huge win for the Windows PC. Every PC laptop today owes its existence to this excellent work by Intel and the supply chain. A small benefit for tech enthusiasts and IT administrators was that the wave of Ultrabook standardization also made it possible to install Windows without requiring additional drivers to be downloaded from PC maker sites.

    Ultrabook PCs rapidly diffused across the ecosystem, from the board room to executive teams to consultants and eventually to students. I remember a 2011 recruiting visit to MIT and Harvard; while I saw a lot (perhaps a majority) of MacBooks, the PCs I saw were all newly purchased Ultrabook PCs with their sleek, un-PC-like aluminum cases.

    Many believed Ultrabooks would put a dent in iPad momentum. Once again, it is worth a reminder that Apple’s iPad was absolutely top of mind for the industry. The iPad was the holiday gift for 2011. Apple sold over 32 million iPads in 2011, and the tablet redefined the baseline requirements for road warrior productivity computing. Apple, hoping to sell every Apple customer an iPhone, a MacBook, and a new iPad, remained relentless in promoting the distinct use case for iPad while also continuing to tout the iPad as the future of computing.

    It was this spike in demand for iPads that drove the difficult conversations with the Microsoft Office team about the role of Office on Windows 8, specifically Windows 8 on ARM processors, including the ARM device we were developing in-house that was quite secret at the time.

    Old Office

    We always knew going into Windows 8 planning that developing a new platform and new API meant also enlisting the support of the Office team. This was not some innovative thinking on our part, but literally the Microsoft playbook from the founding of the company.

    A platform cold start requires a huge leap of faith from the consumers of the platform. The best way to seed interest in the platform is to have flagship applications that can be demonstrated to other developers to get the flywheel going. Cool apps on a platform attract more app developers, which leads to cooler apps. Sometimes this is even labeled a “killer application,” though I think that is a bit dramatic since it doesn’t kill but brings life to the platform. Lotus 1-2-3 cemented MS-DOS, just as Microsoft Excel, Aldus PageMaker, and Microsoft Word cemented Macintosh. Excel and Word were crucial to validating Windows and even to Microsoft’s ability to complete building Windows for significant applications. Ultimately Office on Windows 3.0 proved to be a tipping point for Windows. In a turn of fate, Windows also proved to be critical to the success of Office as a productivity suite. Microsoft not only came to define this virtuous platform cycle but benefitted itself with Office.

    Platform shifts offer a unique moment when all bets are off, and the new leaders can emerge. It is why Bill Gates was so tuned in to creating and betting on platform shifts and why Silicon Valley always seems to be seeking out these moments. Recall from the earlier section 011. Strategy for the 90s: Windows, that platform shifts can be so dramatic that even within a company people decide to quit over them. My first manager and programming legend Doug Klunder (DougK) famously left Microsoft in the early 1980s over the strategy change to build a Macintosh spreadsheet rather than ship the MS-DOS spreadsheet that he had built and was certain would beat Lotus 1-2-3. That spreadsheet and Doug’s recalc invention (minimal recalc) formed a core part of Microsoft going forward and Doug later returned.

    As discussed previously, many describe nefarious means behind the rise of Office on Windows compared to Lotus 1-2-3 and WordPerfect, among other leading MS-DOS productivity tools. Among the leaders of MS-DOS applications, none prioritized Windows above all other platforms. Instead, they chose to nurture their existing MS-DOS successes and character-mode user experiences. Microsoft uniquely bet on Macintosh and then Windows, perhaps because it lacked the mega-successful MS-DOS applications business the others possessed. Nevertheless, Microsoft bet and bet big and bet early. The rest is history.

    Little did I and the other leaders on Windows 8, all of whom had previously worked on and led Office, realize how the tables would turn. Here we were at the start of Windows 8 knowing that we would be introducing a new platform and fully aware of Microsoft’s own playbook. We knew it would take convincing, but I, at least, failed to fully internalize just how difficult it would be for Microsoft to collectively make a bet on a new platform.

    At the Windows 8 Vision Meeting, I sat next to Steve Ballmer and the guests from across the company, including the leaders of Office. We showed Office running in the prototype sketches, just as we showed Office running on ARM in January 2011 at the CES show and then later at //build. It was the Office everyone knew and loved, running on the desktop. It was right then at the Vision meeting that SteveB leaned over to ask me about Office for tablets and the new platform.

    I said, definitively, that we had a huge chance to build a new kind of Office application for this new world of mobility, tablets, touch, and more. It wouldn’t be like the current Office, and it wouldn’t be like browser Office—the project started back when I was in Office. We had a chance to define the new user interface paradigm for this new style of productivity, much as Office had come to define the user interface paradigm for mouse and keyboard.

    We had many conversations with the Office team throughout building Windows 8. Our goal was simple—convince the team to build a new kind of app for the new Windows 8 platform. We did not want to port the existing apps to Windows 8 any more than we wanted any ISV to do that. Microsoft’s own experience with the first version of Word for Windows, a legendary mishap caused by trying to port code from one platform to another, was enough of a lesson to create institutional knowledge of the risks of doing that. Even the early releases of PowerPoint on Windows suffered from the difficulties of trying to port from Macintosh. Many might remember the difficulties Microsoft had with Word 6.0 for Macintosh, which did not fully embrace the look and feel of the Mac and disappointed customers.

    The world changed a great deal with the iPad and web browsers. The historical strengths of Office, focused on fine-grained document control for printing, gave way to multi-player (a recent nomenclature for multi-user) collaboration, sharing, and online consumption, with considerably less demand for formatting. There would be ample opportunity for a new type of application. Perhaps if we collectively developed that, we thought, we could avoid the pressure to provide Office per se for Windows 8, which we knew would be a massive undertaking for a new platform, even a futile one.

    Office was eager to experiment with OneNote, which supported mobile, browser, and more and was widely praised for an innovative experience in an important scenario. Doing something more significant was proving a challenge and a bit frustrating for us. To understand why, it is important to recognize where Office found itself in 2010 and what really drove its strategic choices.

    Historically, the biggest factor in weighing choices for Office has been the maintenance of the value of the overall bundle (Word, Excel, PowerPoint, Outlook, and the Access database) and with that the pricing. Key to maintaining that value was compatibility with “full Office” meaning the complete feature-set of desktop Office. In other words, when sales and marketing were confronted with either a lower-priced or less-featured variant of Office the response was to focus on backward compatibility for dealing with old files and email attachments from anywhere, on the familiarity with the user interface, aka the Ribbon, and on the value of the bundle. By 2010, the Office brand came to represent the Ribbon user interface, the full Office feature set, and support for documents and custom Visual Basic solutions that existed across thousands of enterprise customers.

    Office continued to face somewhat acute challenges to the business from Google Apps (previously Google Apps for Your Domain, often colloquially called Google Docs, more recently G Suite, today known as Google Workspace). Google pushed Google Apps relentlessly through the incredibly successful Gmail launch, something we would do with Windows Live and the Office browser-based applications. Google Apps was making real progress in the academic market, where the product was usually free. Small business was also adopting Google Apps owing to the push via Gmail and the easy use of the apps after registering an email domain with Google. Most importantly, enterprise customers under budget pressures following the Global Financial Crisis also began to consider Google Apps. Use of a difficult economy to put price pressure on Microsoft was something I’d experienced through every downturn since the Gulf War. Customers not only had the option of Google Apps, but they could also maintain full compatibility with Office by simply sticking with the Office they already owned and not upgrading, putting the Enterprise Agreement renewal in jeopardy. Losing even a single deal had the appearance of a “share shift,” and a series of three deals would likely be followed by industry analyst quotes or even press releases from Google, and suddenly there was a trend.

    The presence of this competitive threat made offering anything new at all seem risky. A new product would call into question the value of “full Office,” and certainly anything priced at app store levels would draw attention to the nearly $200 per year Office was receiving in enterprise agreements, which today could be over $40 per month for Office 365. With a few thousand Enterprise Agreement customers, taking on this kind of product challenge was a level of risk the Office team was not going to be comfortable with all that easily.

    Office for Mac, long mostly out of sight and out of mind, had become a lot more interesting of late. Back when I was in Office and following the release of Mac Office 4.x and Windows Office 95, the legendary deal between Steve Jobs and Bill Gates was struck. Most recall this deal through the lens of antitrust and Microsoft throwing Apple a life preserver just as Steve Jobs returned to a company on the brink of bankruptcy. Microsoft indeed paid Apple $150 million in exchange for patent rights, which would ensure the end of litigation between the companies. In exchange, Apple also received a commitment to Macintosh and that Office would continue to support the platform. It was then that we (in Office) created the Macintosh Business Unit (MacBU), staffed at 100 engineers to start. The goal for us was to stop kidding ourselves about doing a great job of cross-platform support and let the Mac team focus on Mac and the Windows team focus on Windows, with judicious code sharing as the MacBU, staffed with Office engineers, deemed necessary. From that time forward, August 1997, the Mac team went merrily on its way, as did Windows.

    Each of the subsequent leaders of MacBU was deeply connected to Apple, and discreetly so. The business relationship required that. Most of Mac Office was sold not through Microsoft’s typical enterprise accounts but direct to individual customers. As Apple’s new retail stores scaled, the sales of Mac Office through that channel quickly became the primary means for Apple customers to acquire Office. MacBU found itself somewhat beholden to Apple for distribution, price support, and even launch PR. In exchange, MacBU dedicated its efforts to being a fantastic client of the Mac OS as it evolved. Such support had become increasingly difficult as Windows and Mac diverged and then as Mac became a diminishingly small part of the overall business, unlike the 1980s when it was more than half the business. The deal created a win-win for Apple and Microsoft, not to mention Apple customers.

    Part of the relationship between MacBU and Apple included early access to Apple technologies under development, exactly as Microsoft had done for Windows with leading vendors. By offering this, Apple hoped to garner MacBU support for the latest platform technologies. The transitions Apple made to the new OS X and then later to Intel processors all but required Office to be present from the first days of availability. Apple knew this. MacBU knew this. Our industry knew this. It was the way platforms worked.

    As part of this work, MacBU also began to migrate the Office code base from the older Macintosh operating system APIs to the new OS X APIs, called Cocoa. As previously mentioned, Apple had a different way of evolving the OS compared to Microsoft. Apple not only strongly encouraged ISVs to move to new APIs, but it also obsoleted APIs, thus requiring ISVs to move. Sometimes, with their big code bases, the important ISVs would slow-roll these moves. Adobe was (and remains) legendary for its slow pace of adoption. MacBU tended to be nimble, due at least in part to the ever-present connection in the Apple retail stores. Plus, using the latest was a pretty cool thing to do in the Mac community.

    As Apple’s new tablet came into existence, MacBU had to decide what to do to support it long before the general public or anyone else at Microsoft was aware of the project. To its credit, MacBU guarded Apple trade secrets just as Apple would have hoped. No one outside MacBU was aware of this project. Apple lobbied the team to make Office for iPad. Part of the promise was that the new iPad APIs were reasonably close to the OS X APIs and the work would not be crazy, taking advantage of the migration to Cocoa.

    Of note, this was decidedly not an approach taken by Windows 8. Apple had already spent a generation modernizing the APIs for the iPhone and pushing ISVs to adopt more modern approaches for Mac. In other words, they were way ahead of Microsoft.

    It was not quite so simple though. Apple threw a few wrinkles into the mix. First, Apple had put a good deal of effort into its own suite of Mac apps, called iWork, which were direct competitors to Office. This proved frustrating for Microsoft and also provided leverage for Apple. If MacBU didn’t follow an Apple strategy, Apple would push ahead on its own with iWork.

    Second, iWork was cheap. In the early days, iWork cost $79, substantially less than Office, which usually sold for $150 and up to $280 with Mac Outlook. MacBU was no happier about a low-priced competitor than Windows Office was, but this was especially problematic because of the first-party nature of the competition. Whatever MacBU offered for the future tablet was going to need to be price competitive or face a real uphill battle.

    Related to this, the iPhone app store already demonstrated that pricing for apps would be much lower and much less favorable (to ISVs) than selling boxes of software at retail. Few apps sold for more than $9.99 and all had a perpetual license for as many devices as someone owned. That was not close to the level of pricing (and margin) to support the MacBU R&D commitment, especially without significantly more volume.

    Third, the iPhone OS, and by extension this future tablet, would not initially support “full Office” for all the same reasons that Windows 8 would not be able to. The security model, the user experience, and even the setup and distribution tools were all substantially less functional, or at least different enough to matter. Simply porting Office to the tablet was no more possible for MacBU than it was for Windows Office supporting Windows 8. Maintaining the complete meaning of the Office brand—and the name—along with the price point would be impossible.

    The concerns over pricing, cannibalization, and the Office brand on the new Apple tablet mirrored those of the Windows Office team. Still, unlike the Windows Office team, the Mac team began investing in the new tablet, and by the time we were deep in discussions about Office and Windows 8 there was substantial progress on Apple’s new tablet OS. I did not know this at the time, but once the iPad was announced I was made aware.

    This created an awkward situation. If there was indeed a new version, or a port, of Office for iPad but not an equivalent product for Windows 8, Microsoft would look confused at best or dumb at worst. MacBU was caught between its own business needs and the strategic needs of Apple, while the Windows Office team was caught between its business needs and the strategic needs of Microsoft overall.

    Meanwhile the Windows Office team was doing great work retargeting Intel-based Office to run on ARM processors. This was not trivial or zero work, but it was super well-understood. Much like Windows NT, Office had for a long time been insulated from the specifics of chipsets and instruction sets. So long as the compiler could generate the right code and the Win32 APIs were available, the effort was straightforward. Going as far back as the January 2011 CES meeting, Office had native builds of Word, Excel, and PowerPoint for ARM using Win32, and these were important to our demonstrations of SoC support.
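
    To make the “insulated from the instruction set” point concrete, here is a minimal sketch, not actual Office code and purely illustrative: a program written strictly against documented Win32 calls, with no inline assembly or architecture assumptions, recompiles unchanged whether the toolchain targets x86, x64, or ARM.

        /* Minimal illustrative sketch (not Office code): a program that
           depends only on documented Win32 calls has no knowledge of the
           CPU it runs on. Retargeting it to ARM is a matter of pointing
           the compiler and linker at a different architecture; the
           source is untouched. */
        #include <windows.h>

        int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                            PWSTR pCmdLine, int nCmdShow)
        {
            /* The same API call compiles and behaves identically on any
               architecture the Win32 libraries support. */
            MessageBoxW(NULL,
                        L"Same source, same Win32 API, any instruction set.",
                        L"Portability sketch", MB_OK);
            return 0;
        }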

    Our plans, however, did not include making the Win32 API available to third parties on ARM. As described previously, the Win32 API for apps presented many challenges on ARM, including power management, security (viruses and malware), and full integration with the Metro user experience, such as contracts for sharing information. Outlook was particularly problematic when it came to power usage due to all its background processing. After a decade of fine-tuning, the power profiles for Word, Excel, and PowerPoint were excellent, far better than almost all Windows software.

    We intended to provide the desktop for working with files in Windows Explorer along with the traditional control panel for settings. Admittedly, this would prove to be the source of more confusion than convenience. Still the situation was not unlike that of the MS-DOS command box on Windows to some degree, especially on the non-Intel versions of Windows NT.

    The bigger problem was that an ARM device would require a keyboard and mouse to use the desktop versions of the Win32 Office apps, as well as the Windows desktop itself. While all ARM devices would support a keyboard and mouse, not all would have one readily available, as some would be pure tablets like the iPad. The Office team introduced some user interface redesign to better support touch, but at a fundamental level Office required a mouse and keyboard for any sort of robust usage.

    After a series of difficult conversations culminating in an executive staff discussion, we arrived at an end-state where we would ship Win32 desktop versions of Word, Excel, and PowerPoint and a Metro-style OneNote. It is not without irony that I reflected on the fact that OneNote exists because we (in Office, me and others) made a bet on the Tablet PC edition of Windows when the team building the OS failed to demonstrate user interface mechanisms or scenarios for how or when a pen could be used for word processing or spreadsheets. Not only did we feel we had ample user interface metaphors, we also believed we had articulated the type of platform changes around touch, cloud, and mobile that could be incorporated to build a new and unique productivity tool.

    When it came down to it, the Office team did not want to invest in a subset of Office, reduce the brand value proposition for the apps because they would lack support for Visual Basic and other enterprise features, or take on the risk of developing something entirely new. The biggest challenge for Office on iPad would prove to be the same one many other existing ISVs faced: the iPad app pricing and licensing model. Office was early in moving customers to Office 365, and on Mac the business was still the $150 perpetual license. There was no way to charge that much for apps on iPad at the time, especially with Apple apps priced at $9.95. The App Store rules would prove complicated for Microsoft as well, which would not want to cede subscription revenue to Apple. I’m sure there were many discussions with Apple that I was not part of.

    Ultimately, the Microsoft Office team had the same choice to make for the iPad as it did for Windows 8 but chose one way for the iPad and the other way for Windows 8. It was the same type of platform choice faced 30 years earlier when the Microsoft Apps team faced building a spreadsheet for MS-DOS or for Macintosh and then Windows.

    I obviously understood all these reasons deeply from my own first-party experience. At the same time, the company’s choice felt to me like the most conservative approach we could take at a time when we needed to be taking on more risk. The company was going through a difficult period and was getting beaten up for the risks it was taking by investing across search, gaming, and the web, and for its inability to get ahead in smartphones and browsing. Reducing risk to Office at a time like this is what can happen when a company sees defending the existing business as the top priority over what could be viewed as innovation or new opportunity. Platform shifts are extremely difficult for the existing leaders, and Office was a leader on both Mac and Windows.

    On ARM devices, the lack of Metro-style Office (also called Office for WinRT) would prove even more confusing to the market because it would appear as though we were holding back key platform capabilities by not letting third parties port their applications to the desktop. We knew there would be little interest from third parties in providing native ARM applications; they would have to go through all the effort to port and support them when there were plenty of Intel-based systems out there for their customers to use. Most desktop applications for sale that were under active development were also the kind of tools that required high-end PCs and were not optimized even for laptops, applications such as those from Adobe, Autodesk, Dassault, and the like.

    Apple provided iWork for the iPad from the initial launch of the iPad in 2010. They used the iPad as an opportunity to purpose-build tablet apps. By 2017, iWork was free across all platforms. Apple had nothing to defend and only benefitted from the effort to support the platform, so I suspect this was all relatively easy for them to do.

    During that executive staff meeting we also had the difficult discussion about the existing iPad apps for Office. In this case, the same concerns expressed about Office for WinRT existed for Office for iPad. The only difference was that Office for iPad was well on its way to being done by late 2011. The team was concerned about what to call the apps, the lack of Outlook for iPad (they had only built Word, Excel, and PowerPoint), and the feature differences between the apps on iPad and Windows. Outlook would follow many years later, based on an acquisition of a Silicon Valley startup. Office settled on a business model where the apps were free for existing 365 subscribers or a $99 in-app purchase. Non-subscribers could use the apps to view, but not edit, documents. It would be messy and frustrating for customers who were searching the Apple App Store for “Word” and “Excel” in huge volumes and finding nothing from Microsoft.

    Collectively we agreed that it would be embarrassing for Microsoft to have tablet apps first on iPad and then never on a Windows PC. The subset of Office apps would be released for iPad in March 2014. In 2021, Microsoft released an all-in-one app called Office, which supported each of the apps in a single download as well as Microsoft’s cloud storage, OneDrive.

    The world was waiting for validation from Office for the new Windows 8 platform and would only see that through OneNote, which was a nice addition but not the killer app that platforms require. While I am certain there is no one aspect of Windows 8 that contributed more than any other to the market reception, the lack of Office designed for Windows 8 was a top issue.

    Big Windows 8 Consumer Preview

    The march of external Windows 8 milestones continued.

    After the //build conference, which was so energizing, we held an event highlighting the app store that would be part of Windows 8. We held the event in San Francisco and ran through the economics of the store and the opportunity for developers. Antoine Leblond (Antoine) led the entire event, as he had also led the team building the store from scratch.

    The Windows Store was another part of the whole of Windows 8 that went from a blank slate to a complete offering in the span of the release. By the end of 2011, just months after the //build conference, we already had the store up and running, and soon developers were able to exercise the app submission and approval process. As with the WinRT API, we had the process and tooling well documented. There were no major hiccups, and the store did what we needed it to do and did so gracefully.

    From the time of the Store event through the broad beta test, the marketing and evangelism teams worked tirelessly to onboard developers of all sizes and get their apps into the store. Going from 17 sample apps from interns and the apps built by the Windows team (Mail, Photos, Calendar, Contacts, and more) to hundreds of apps available just after beta including many first-tier consumer apps was a huge accomplishment.

    The final major event for all of Windows 8 was to release the broad beta test. Normally we would not have a special event for a beta, but given the huge change and major effort involved in releasing a new platform, we chose to use the massive mobile telephony industry meeting, Mobile World Congress (MWC), in Barcelona, Spain, to announce and make available the Windows 8 Consumer Preview. This would be a downloadable release available to anyone right after our press event on February 29, 2012.

    MWC was also the big event for Windows Phone and usually where there were big announcements. The 2012 show was between phone releases, with Windows Phone 7.5 already in market and Windows Phone 8 still under wraps for an end-of-year launch coincident with Windows 8 for PCs.

    Therefore, the big news for Microsoft would be the broad beta test for Windows 8. We were excited. So far, while there were questions and certainly challenges, primarily bootstrapping the new platform, we’d seen positive results from developers, launched the Windows Store, and the product was solid. The Consumer Preview was a refinement of the Developer Preview, with no major strategic changes from the release less than six months earlier.

    The event was for press only, held in a jam-packed tent of about 200 people. We were feeling pretty good about the planned presentations. While there was no strategic news, we had made over 100,000 product changes since the Developer Preview.

    Our primary message was the complete Windows 8 story: the operating system, apps, the Windows Store, and PCs and peripherals that would shine on Windows 8. While the world was fixated on and obsessed with tablets, we continued to emphasize our message that computing devices were converging. Mobile platforms were rapidly gaining capabilities and in many ways, such as power management and sensors, surpassing Intel-based PCs. On the other hand, Intel-based PCs had a quality resurgence with Ultrabook specs, and Windows 8 would bring touch capabilities and proper apps to Intel PCs with touch screens. I would always emphasize that we expected traditional laptops to incorporate touch screens and remind people that in the near future all their screens would have fingerprints.

    Julie Larson-Green (JulieLar) and Antoine Leblond had a series of demonstrations including the Windows 8 experience and many apps that had been submitted by developers to the store. The progress in just a few months on the Windows Store and third-party apps was a positive sign for the platform overall.

    Mike Angiulo (MikeAng) and I showed off a very exciting and broad range of Windows 8 PCs from small tablets through a giant Perceptive Pixel 82” touch screen, the kind used on CNN for election night. The emphasis was on how Windows scales across the form factors and price points all with a single operating system.

    Following the conclusion of the event a press release went out detailing the availability of the Consumer Preview for download. Within less than a day, the release was downloaded a million times. The Windows team back in Redmond was more ready than we were at the Windows 7 beta, standing by for the download traffic. Everything went smoothly.

    The press coverage of the event and importantly the reviews for the release were quite good. I know as you read this, you must be wondering if I had too much Cava in Barcelona. How could professional tech reviewers possibly like Windows 8? They did. I swear. In the first couple of days, we had over 250 first looks and reviews. Nearly 70% of them scored over 110 on the PRIME score measuring overall tone, where 100 is neutral. There were 4 perfect scores at 200. Only 13% scored 90 or less, and this was during an era when Microsoft typically averaged 90 on most product launches.

    Like the Developer Preview, across mainstream media, tech enthusiast press, and even broadcast we received an incredible volume of positive to effusive comments. I sit here today writing this and I know readers must think there is either highly selective editing on my part or some sort of conspiracy to rewrite the past. I can assure you it is neither. In writing Hardcore Software, I made a point of researching and rereading much of the contemporary press for every product, even Windows 8 product reviews.

    We spent two weeks on the road with the press doing briefings, and dozens of outlets received the Samsung tablet from //build and the near-final Consumer Preview release. Unlike most previous Windows releases we did not run into technical problems; rather, I received quite a bit of private communication expressing positive impressions and almost surprise at how radical and yet comfortable the Windows 8 experience seemed to be. Once the release was available, reporters started to comment in their reviews, blogs, and even tweets:

    “Brace yourself, Windows users. Microsoft’s operating system is poised for stunning, dramatic change.” —Christina Bonnington of Wired

    “By the time the final version ships later this year, it’s clear that Windows 8 is going to be a remarkable, daring update to the venerable OS. It is a departure from nearly everything we've known Windows to be. You will love it, or hate it. I love it.” —Mat Honan of Gizmodo

    “In short, Windows 8 is elegant, dynamic and beautifully created.” —Jeremy Kaplan of FoxNews.com

    “It’s an interface that’s girding itself to compete with the iPad as much as the Mac, even though it’s not an iPad knockoff.” —Harry McCracken of TIME

    “Windows 8 is evidence that the old tech company is quite capable of bold moves and impressive innovation.” —Michael Muchmore of PC Magazine

    “One thing is knowable now: With Windows 8, Microsoft has sweated the details, embraced beauty and simplicity, and created something new and delightful. Get psyched.” —David Pogue of The New York Times

    “Don’t confuse the friendly interface with superficiality. This is Windows and can do all the things we’ve come to expect on a Windows PC.” — Wilson Rothman of MSNBC.com

    “The first beta of the next-gen operating system is eminently touchable, definitely social, and maybe just a bit sexy.” — Seth Rosenblatt of CNET

    “Overall, I’m extremely impressed with the next version of Windows – the features, the new ways of interacting with a tablet, and the potential of it all.” —Joanna Stern of ABCNews.com

    Even the first apps available in the Windows Store were receiving positive feedback. The quality of the apps available in the store was quite good. This was very early in the mobile apps era, which makes it difficult for most to think back to how relatively basic the software was at the time. Android apps were notoriously flakey, and while there were huge numbers of iOS apps, most did not do very much. In evaluating the first apps from the Windows Store, Joanna Stern of ABC News (a new beat after moving from The Verge) tweeted “First impression: the apps I see for Win 8 are already 100x better than the ones for Honeycomb or ICS [Ice Cream Sandwich, the code name for the next Android release]. Very exciting.”

    Techmeme and Google News (which was the new cool place to look for headlines) were filled with coverage and positive headlines. Google News registered over 9,200 stories from the sources it crawls. This was our first Twitter-centric release, and Windows 8 had almost 200,000 tweets in the first 24 hours, at peak 20,000 per hour. It might not seem like a lot, but the Twitter-verse was relatively small in 2012, at just over 100M users.

    One of our perfect score reviews, a 200 PRIME and 5 on message pickup, came from David Pogue at the New York Times. Pogue was no fan of Windows and was super well-known as an author and expert on Apple products. He wrote books on all the major systems and devices but was always a bit grouchy when it came to Windows. I had flown down to San Francisco to meet with him and pre-brief on Windows 8. He gave me an earful and I was quite concerned about the pending review. I wanted to share a bit of what the review had to say:

    It’s a huge radical rethinking of Windows — and one that’s beautiful, logical and simple. In essence, it brings the attractive, useful concept of Start-screen tiles, currently available on Windows Phone 7 phones, to laptops, desktop PC’s and tablets.

    I’ve been using Windows 8 for about a week on a prototype Samsung tablet. And I have got to tell you, I’m excited.

    For two reasons. First, because Windows 8 works fluidly and briskly on touch screens; it’s a natural fit. And second, it attains that success through a design that’s all Microsoft’s own. This business of the tiles is not at all what Apple designed for iOS, or that Google copied in Android.

    . . .

    Swipe from the right edge to open the Start menu (with Search, Share, Devices and Settings buttons). Swipe from the left to switch apps. Swipe in and back out again to open the app switcher. Swipe down to open the browser address bar.

    If you have a mouse, you can click screen corners, or hit keystrokes, to perform these same functions.

    These swipes take about one minute to learn. On a tablet, I can’t begin to tell you how much fun it is. It’s evident that Microsoft has sweated over every decision — where things are, how prominent they are, how easy they are to access. (If you have the time, watch the videos to see all of this in action.)

    . . .

    But one thing is knowable now: With Windows 8, Microsoft has sweated the details, embraced beauty and simplicity, and created something new and delightful. Get psyched.

    I want to be sure to capture both the grief he gave me in person and what he wrote in the review. Many reviews indicated that there could be potential for some to have difficulty using the product or even wanting to make a change, though the reviewers themselves did not express the difficulty. When it came to expressing the weaknesses of the Consumer Preview, Pogue said:

    The only huge design failure is that Microsoft couldn’t just abandon “real” Windows completely — desktop, folders, taskbar and all those thousands of programs. So on a PC, hiding behind this new Start screen is what looks almost exactly like the old Windows 7, with all of its complexity.

    In other words, Windows 8 seems to favor tablets and phones. On a nontouch computer like a laptop or desktop PC, the beauty and grace of Metro feels like a facade that’s covering up the old Windows. It’s two operating systems to learn instead of one.

    . . .

    Look, it’s obvious that PC’s aren’t the center of our universe anymore. Apple maintains that you still need two operating systems — related, but different — for touch devices and computers. Microsoft is asserting that, no, you can have one single operating system on every machine, always familiar.

    The company has a point: already, the lines between computers, tablets and phones are blurring. They’re all picking up features from each other — laptops with flash memory instead of hard drives, tablets with mice and keyboards. With Windows 8, Microsoft plans to be ready for this Grand Unification Theory.

    It’s impossible to know how successful that theory will turn out to be. Windows 8 is a home run on tablets, but of course it has lost years to the iPad. (The Zune music player software was also beautiful — it was, in fact, the forerunner to Windows 8 — but it never did manage to close the iPod’s four-year head start.)

    This is as perfect as a review can get. As a note, the scoring of reviews was done by a separate research and data team at the PR agency, and it was as scientific as one could be at the time. Like Olympic judges, they did not like to give out perfect scores, as that made their job more difficult down the road.

    Even Wall Street got in on the positive sentiment. Normally I would not have paid any attention at all to a financial analyst view on product design, but the analyst community had become part of the cycle around an ever-expanding Apple and an ever-shrinking Microsoft. Citi research department reported the following on March 11, 2012, in a report I received:

    We continue to view windows 8 as a positive catalyst - Inputs since the Feb 29th Community Preview (beta) have continued to be positive with developer and IS momentum appearing to pick up. We continue to expect general availability of win 8 devices around Sept / Oct with ARM devices likely later (although MST and OEM goal remains coincident). Given long-term secular concerns, share gains in tablets or renewed momentum in the PC market in 2H should be positive catalysts for shares. We acknowledge that the stock has a tendency to "fade" post release and thus we believe stock performance beyond that depends on tangible success of release.

    What about the negative stories? There were some, just not that many. In fact, most of the truly negative stories were about specific features in the release. For example, one reviewer noted that ARM Windows 8 would not include Outlook, another bemoaned the lack of support for old-style enterprise management tools in favor of modern mobile device management, and another was concerned about a keyboard change to the diagnostic boot sequence.

    When there were negative stories, they did indeed carry the themes alluded to in the Pogue review. For example, Dan Ackerman at CNET wrote in a review titled “Does Windows 8 diss the PC?”:

    According to NPD Group, PCs are still the largest category for U.S. consumer technology hardware, selling $28 billion worth of desktops and laptops in 2011 (a 3-percent drop from 2010). Tablets and e-readers nearly doubled from the previous year, to $15 billion, but that was mostly on the continued strength of Apple's iPad. If you take iPad (and Android) out of the equation, interest in Windows tablets is still tiny. It will no doubt grow under Windows 8, but mostly because it has nowhere to go but up.

    So just remember, while watching everyone pinching, swiping, and tapping their Metro interface tablets and convertible laptops over the next several months in product demos, the real work is, and will continue to be, done on traditional laptops (and, yes, desktops) for a long time to come.

    From our perspective, this notion of “two in one” echoed how Windows itself was originally reviewed, when it was first used simply as a way to run MS-DOS applications in character mode. The two decades of seamless Windows compatibility across 16-bit, 32-bit, and 64-bit perhaps created an expectation that compatibility was always essentially transparent.

    Over the course of the Consumer Preview millions of people would download and test the product. Many would write blogs, tweets, and record YouTube videos about their experience. Almost immediately, developers began to hack away at Windows and introduce utilities to bring back their favorite look and feel from past releases. All of this was perfectly normal. We’d seen this before in every release of Windows.

    One competitor took time during their own earnings call to comment on Windows 8, which frankly was quite surprising. What he had to say would definitely become a meme for the record books. In the call, a financial analyst asked the following question of Apple’s Tim Cook:

    I was wondering if you can talk about how you think about the markets for tablet and PC devices going forward. I think you've been fairly clear about saying that you believe that tablets will eclipse PCs in volume at some point. And I think you've also said they're somewhat discrete markets. There seems to be a lot of work, particularly on PC-based platforms, towards trying to combine the PC and tablet experience going forward in part because Windows 8 will be able to -- is a touch-based operating system as well. Can you comment about why you don't believe the PC or the Ultrabook and tablet markets or your MacBook Air and tablet markets won't converge? Isn't it realistic to think in a couple of years we're going to have a device that's under 2 pounds with great battery life that we can all carry around and open as a notebook or close up in a clever way and use as a tablet? Can you comment on why you don't think that product might not come or why you believe these markets are separate? (Tony Sacconaghi - Sanford C. Bernstein & Co., LLC., Research Division)

    Tim replied as follows:

    I think, Tony, anything can be forced to converge. But the problem is that products are about trade-offs, and you begin to make trade-offs to the point where what you have left at the end of the day doesn't please anyone. You can converge a toaster to a refrigerator, but those things are probably not going to be pleasing to the user. And so our view is that the tablet market is huge. And we've said that since day one. We didn't wait until we had a lot of results. We were using them here, and it was already clear to us that there was so much you could do and that the reasons that people would use those would be so broad. . . . I also believe that there is a very good market for the MacBook Air, and we continue to innovate in that product. And -- but I do think that it appeals to [somewhat --] someone that has a little bit different requirements. And you wouldn't want to put these things together because you wind up compromising in both and not pleasing either user. Some people will prefer to own both, and that's great, too. But I think to make the compromises of convergence, so -- we're not going to that party. Others might. Others might from a defensive point of view, particularly. But we're going to play in both.

    This was aimed squarely at our positioning of devices converging. It would be several more years before Apple would add a keyboard and a trackpad to the iPad Pro. Apple would then iterate over several releases trying to add different types of advanced app launching and window management, while continuing to add support for devices such as external storage, third-party microphones, etc. These were all supported on Windows in all form factors, including on ARM devices. Still, as all the reviews noted, Apple had a giant head start on the incredibly successful iPad, while Microsoft and Windows had the PC all but locked up. At least for the time being.

    If I were to be a bit delusional, I might think that Windows 8 had Apple a bit concerned. That would be crazy since in all the years I worked and competed with Apple I never knew them to care very much at all what Microsoft did outside of supporting the Mac. Instead, I would definitely give Apple credit for sticking with their narrative and strategy. They stood to benefit enormously if they could convince their Mac customer base that they also needed an iPad while also convincing the world that the iPad was the future of productivity computing.

    As it would turn out, Apple digging in along this line would prove to be extremely good marketing as well. By continuing to define tablets as a separate category and the future while they had a two-year head start, they made our attempt to message a different narrative extremely difficult. Many of the difficulties discussed in the next section relate to Apple’s success at defining the iPad as both distinct from PCs and also the future. That left us little room to have a device that was able to do both. The toaster-refrigerator metaphor only made that clearer.

    If we fast-forward, we can see how Apple stuck with this positioning while also making things very difficult for themselves. Given the chaos surrounding iPadOS 16.1 when it came to advancing software and the complexity of the iPad line, including the add-on keyboards, a strong case could be made that Apple has so far mismanaged the opportunity they created. Unit sales remain spectacular but perhaps less as a mainstream computer and more as a specialized device. Time will tell.

    During the first week of the release, we had over 1.6M activated testers and easily surpassed what we saw with the incredibly successful Windows 7 beta test. The Windows Store and WinRT notification platform were seeing numbers in the millions as well. We were receiving usage data from these users and were able to analyze what they were experiencing in terms of quality, reliability, and feature usage. This was an incredibly successful wide-scale beta test.

    We had months of decelerating bug fixing and a giant matrix of hardware and software compatibility to test. But that paled in comparison to the dramatic change in tone and tenor we faced, suddenly and almost out of the blue.

    Where do we start?

    On to 106. The Missing Start Menu



  • Imagine building a computing platform that powers a generation. Now imagine taking the big step of building the replacement for that platform while the original needs to keep going for another decade or more. This is the story of unveiling the new Windows 8 platform for building modern apps, WinRT, at the first //build conference. The difficulty in telling this story is how everyone knows how the world came to view Windows 8. The developer conference of 2011 was a different story entirely. We still had to work through the big issue within the world of .NET developers and their extreme displeasure with the little we said about the Windows 8 developer story a few months earlier at the preview of the user experience. We had so much to share and were very excited as we made our way to Los Angeles.

    Back to 103. The End of Windows Software

    The iPad was out there and still had skeptics. Pundits continued to assert that tablets were not good devices for content creation. Techies saw it as a consumer toy for lightweight computing. This same thing had once been said of PCs, right up until they overtook computing. It was said of server PCs, right up until they overtook business workloads and cloud computing.

    Steve Jobs, at the 2010 All Things Digital D8 conference, reminded the audience that the iPad was just getting started and added, “I think we’re just scratching the surface on the kind of apps we can build for it. I think one can create a lot of content on the tablet.” By 2011, Apple was demonstrating increasing confidence in the path they had created with iPad. The iPad was already the preferred tool for the road warrior, the boardroom, and the back seat. The iPad and iPhone combined with the developer platform had become the most formidable competition Microsoft ever faced. As much as Android unit volumes concerned the Windows Phone team, there was no ignoring Apple. Some were deeply concerned about the tsunami of small Android tablets. Given what we went through with Netbooks, low quality devices, even in high volumes, concerned me less.

    The PC was moribund. The situation in Redmond became increasingly worrisome. This was despite our solid progress on Windows 8 and the interim Windows Phone release, Windows Phone 7.5.

    The chicken and egg challenge of platforms is well known. How does a platform gain traction from a standing start? Every platform faces this, but it is unusual for the established world leader to be wrestling with this problem. When I think of how the computer world had literally revolved around every utterance about Windows, it was downright depressing if not scary.

    The challenge the company faced was the dramatic loss of developer mindshare. Between web browsers, iPhone/iPad, and then Android, there was no room left for Microsoft. Win32 was legacy, a solid legacy, but a legacy. The latest efforts for Windows Phone seemed to have stalled at best. While there was a rhythm of press releases about app momentum for Windows Phone, the app numbers were tiny relative to Apple and Google and the app quality was low. Phone units were small too, meaning attracting developers was becoming more difficult not less.

    Every leadership team meeting provided another opportunity to debate the merits of using financial incentives to lure developers to the platform. And at each meeting I raised the reality of adverse selection that every competitor to Windows had learned over the past two decades. The Xbox team loved to talk about how much they spent on exclusives, but that was a walled garden world of intellectual property. In an open platform, once you’re spending money to win over developers, the least motivated developers show up with the wrong apps, creating an awful cycle where paying developers attracts more of the less desirable developers building even more of the wrong apps. But not spending money seems guaranteed to lose if there’s no organic interest. This debate would become front and center with Windows 8 as we faced the same challenge.

    The concerns over the specifics of competing with iPad and Android tablets, and how Microsoft and partners would respond, occupied an increasingly worried board. We had our plans for Windows 8, but the obvious question was could Microsoft do something sooner? In the summer of 2011, we were a couple of months from our developer conference in September and certainly less than 18 months from general availability of Windows 8. I assumed we could wait that long and knew we could not finish sooner. I also assumed there was no emergency product development that could finish something useful before then. That didn’t stop us from having a classic Microsoft hand-wringing series of meetings to attempt to conjure up a plan. Time for yet another Gedankenexperiment as part of a series of meetings with some members of the board and others.

    We were not yet certain of the how or who of delivering ARM devices, particularly tablets, though by this point we had test hardware running and we were deep in potential designs for our own device. As a result, I was my routinely cautious self in an effort not to over-promise, especially to this group. I was, perhaps wrongly, determined not to get ahead of our own execution. Such a conservative stance was my nature but also not the norm or even appreciated. I was happy to talk about our developer conference and what was possible. The specifics required to answer when and by whom there would be a mini tablet running Windows were well ahead of where we were.

    I reviewed our progress on Windows 8, again. The problem seemed to be that we were not getting enough done nor was it soon enough. They were right. Who wouldn’t want more and sooner? From my vantage point, if we finished when we said we would it would be impressive and historically unique, even on the heels of Windows 7. How quickly people can forget the past. It felt like “more cowbell.”

    Shaving time off the schedule was discussed—it wasn’t a grounded discussion, just a wish. After that, Terry Myerson (TMyerson) and his co-leader Andy Lees (AndyLees) of the Phone team, shared the early plans for Windows Phone 8. Since Apple had made a tablet out of the iPhone, the natural question was, could we make a tablet out of Windows Phone? Of course, we could do anything (“It’s just software” as we said), but which phone and how soon?

    Some in attendance even asked if we should do a “quick” project and build a tablet out of WP7 (or 7.5). Could we take an Android tablet design and put Windows Phone 7 on it? Why not? It seemed so easy. None of these ideas could possibly happen. There’s no such thing as a quick project. The last quick hardware project Microsoft did was Kin, a poorly received smartphone.

    There wasn’t much I could do. From one perspective, Windows was suddenly the product that was holding us back and Windows Phone was the new solution to our tablet problem. Given what we were seeing in the market, Windows Phone apps, and the technical challenges of the platform, this was a ridiculous spot to land.

    Windows Phone 8, working with Nokia, introduced really big phones called phablets, pioneered by Samsung on Android. For a time, some analysts thought these larger phones would put an end to tablet demand because tablets were in between. Still, tablets continued to thrive for productivity and, as far as Apple was concerned, to become the future laptop. The iPad certainly thrived. It was the cheap Android tablets, the ones the Board was concerned about, that ended up on the trash heap with Netbooks.

    Ultimately, there was no tablet market, just an iPad market. The iPad run rate soon approximated that of consumer PC laptops. It was difficult for Microsoft to see the low-volume player as the real competitor, especially when that player was Apple, a company that nearly died the last time it was the low-volume producer. Android was shaping up to be the Windows ecosystem on phones—high volume, low profit, endless fragility, device diversity (or randomness), and so on, but with a new OS, new OEMs, a new business model, and new mobile developers who were making apps on the iPhone, though the iPhone apps always seemed a bit better.

    The Windows and Windows Phone teams would eventually go through a challenging period where in the middle of Windows 8, we on the Windows team learned that the Phone team had been taking snapshots of various subsystems of Windows 8 for use in the phone OS. This got the product to market but did not give us a chance to align on quality, security, reliability, and code maintenance. This wasn’t done in any coordinated or even transparent manner. Had we, Jon DeVaan and Grant George specifically, not intervened, the company would have been set up for significant issues with security and reliability given we were not finished with the code and there was no process in place to manage “copies” of the code. Jon worked through a better process once the team got ahead of the issue. I had a super tense meeting with SteveB and the co-leaders of the Phone team about this lack of transparency and process and why it showed a lack of leadership, or even competence, on our part. In the new world of security, viruses, vulnerabilities, and more, the company could not afford to be as cavalier with source code as this episode seemed to demonstrate. This was all in addition to the lack of alignment on the developer platform as the Phone team made their early bet on Silverlight, as previously discussed.

    More and sooner was a constant drumbeat throughout the Windows 8 development schedule. I was called to the carpet many times to explain where we were and why we were not finishing sooner. I was grumpy about doing that and I’m sure it showed. Schedules did not get pulled in or completed early. In the history of the company (and software), that never happened. We weren’t going to finish Windows 8 early. We would be fortunate to finish on time, mid-2012 plus two months to reach availability on new PCs.

    The leadup to the fall developer conference was a constant series of crises, external to the development team but swirling all around it. It was stressful at the executive level for strategy reasons. Every meeting, every mail thread, every new online story was just more worry for us.

    It was stressful at the team level because we had asked the Windows team to pull together most of the company for the release.

    In particular, we wanted to fix—yes that is the right word—the schism of the past decade that had replaced what was once lockstep strategic coordination, the Win32 strategy, between the Windows and Developer product groups. The Windows team most accountable for the success of the new platform was DEVX, the Developer Experience team, the counterpart to the UEX team. DEVX was led by Aleš Holeček (AlesH) in engineering, Linda Averett (LindaAv) in program management, and Yves Neyrand (YvesN) in testing. The breadth and depth required of this team were in every way as great as those required of the user experience team.

    Just as we designed a new and modern experience for Windows 8, we designed a new and modern platform to go with it. This platform was designed with the intent of growing into the future Windows platform for all apps across all form factors and devices.

    Prior to releasing to the public, we called the platform the “Modern Application Model,” later the “Windows Runtime,” and finally WinRT, the name we stuck with in honor of Windows NT. Like NT, we never attached a meaning to RT and just said the letters. In this section, we’ll use the term “modern application” for new Windows 8 apps, which had also been called Metro and Metro-style. We were still in the midst of resolving the legal dispute over the Metro name and frankly could not keep the terminology straight, which confused everyone inside and outside of Microsoft.

    We’ve described the experience goals of Windows 8. To realize them, the platform for modern apps had the following goals:

    * Native APIs for the new application model, realizing the benefits of the experience

    * Rapid application development coming from great tools supporting an API that could be mastered with relative ease

    * Amazing tools of choice, including both .NET and standard web technologies

    * Broad distribution and a great business model for apps via a new Windows App Store (covered in the next section)

    The importance of “native APIs” could not be overstated yet was also subject to many philosophical debates over what really constituted native. At one extreme, some believed native APIs simply meant they came from Microsoft and therefore first party was equivalent to native. At the other extreme native APIs implied written from the ground up from scratch, divorced from any existing capability and consequently all new. As with many debates along these lines, one could debate semantics or simply build the product and let it speak.

    I was familiar with this debate from the earliest days of Windows programming. The advent in the 1990s of object-oriented programming called for a new era of APIs that were improved over the abstractions provided natively—in other words taking the native APIs and recrafting them so they would be easier and more amenable to object-oriented techniques. The story of approaching API development this way, and our failure, was documented in section 010. Our BillG Review. In that story, we developed a new API for Windows that was much prettier and tuned to C++ but was also bloated and different than Windows. This failure led to a super lean, efficient, and Windows-centric API that became the standard for building professional apps for the Win32 era, the Microsoft Foundation Classes, MFC.

    Without diving too deeply into the weeds of architecture, there is an important point about systems design to make. What separated MFC from the era of object-oriented frameworks was, as described previously, a bet on the underlying platform to do the work. Our framework did as little work as possible and specifically never duplicated Windows functionality. That strategy permitted only one implementation of any given concept to exist and thus defines native. In that case, Win32 remained the native API because of the choices we made in MFC.

    These choices were the exact opposite of those made in the various .NET technologies, which liberally duplicated capabilities in Windows. This duplication created both inefficiencies and deeper problems. Incredibly difficult problems such as security, accessibility, localization, and so on were made vastly more difficult when there were potentially two (or more) ways to accomplish the same task, or worse, one way in a framework that made only partial use of the OS. In this type of framework there was complete ambiguity as to what constituted native.

    In the modern context, this debate over native capabilities of a platform is the situation currently consuming the Apple strategy of SwiftUI. SwiftUI is a layer on top of the existing native operating system API, but it also reimplements functionality and provides different implementations of other capabilities. Apple is attempting to decree SwiftUI the answer, but even Apple struggles to move its own apps from the old way to the new way. In the context of this story, SwiftUI and the debates surrounding it have a ring of .NET familiarity. I’ve been surprised at the approach but not the result. Apple has the ability to drive change across existing code that we did not have, as discussed in the previous section, which might smooth out this strategy over time. On the other hand, this entire section describes the outcome of fracturing even the most robust of developer ecosystems.

    The approach for Windows 8 was decidedly to define new native APIs and implement those APIs such that there was a single code path for all capabilities. Sometimes the code was brand new to Windows and other times WinRT relied on existing code paths and reused them. Taken as a whole, the set of capabilities represented the first version of the future APIs.

    The first version of Windows with any utility, Windows 3.0, had about 540 APIs, documented beautifully in the book Programming Windows by Charles Petzold, which occupied a rarefied position on every developer’s thick oak bookcase back in the day. Most of us just referred to the book as “Petzold.”

    When Windows 3.0 arrived, it was not exactly meeting a known market demand. The platform was intriguing for a class of developers, some of whom had been working on Macintosh apps and others who just saw opportunity in a new way to program user interfaces. These developers, much like on Macintosh, could pick up a copy of “Petzold” and within a day or so would find their way to writing Windows programs, often as a hobby or side project to impress people at work. It was exciting and rewarding. Within a relatively short time the effort paid off with a solid mastery of the 540 APIs. While there were all sorts of complexity and wizardry in the way the tools worked, the breadth was such that mastery was achievable by a single person.

    The property of mastery is an incredibly compelling part of any new platform and often overlooked. The ability to master the platform and to use that mastery to complete an entire project is when something magical can happen, often for just one or two engineers working together.

    By the 2000s, there was no mastery of the Windows API to be had, by anyone. At the very best, one could master a subsystem of Windows such as networking, DirectX graphics, or the file system. Of course there were some wizards, but most of them worked at Microsoft or soon would. Even the largest software houses would struggle with building reliable and optimal Windows software. When the complexity of the various .NET, data access, graphics, and networking APIs was added to the mix, the system was insurmountable. A huge part of the attraction of various aspects of .NET, such as Silverlight, was the notion that one could master some subset and complete a project. Unfortunately, as discussed, these runtimes were limited enough that one often needed to venture into other parts of Windows to complete projects. There is no escaping that the explosion of headcount, and the lack of management accountability for it, proved to be our worst enemy when it came to strategic platform coherence across thousands of APIs.

    Our Windows 8 plan was to create a new API that covered the full range of development scenarios we expected (or more accurately hoped) to see in the 1.0 version of the modern app world. That API would be the equivalent of “Petzold.” Due to the increasing capabilities of the underlying OS and development tools, the resulting apps would be vastly more capable and easier to write and debug.

    Over the summer as we completed the software build that we would distribute to developers, the DEVX team created “Elements of a Windows tailored app: The Developer Story” which captured the “as we built it” mindset and working terminology used by the DEVX and Developer division teams.

    The team took a scenario approach cataloging many types of potential applications along with how the platform could be used to write them. It also detailed how to reuse existing code, such as line of business code, within applications. At 275 pages it represented the amount of material one could master as an individual or team of 2 or 3 people.

    The modern platform consisted of three main pillars: modern application behavior, application connectivity, and modern application user interface. Within the context of our ever-confusing terminology, the platform is WinRT which we also called the Metro platform.

    One of the biggest changes developers needed to understand was that modern applications did not take over the computer. While many had become increasingly aware of the security model of Windows, which prevented access to various parts of the system without a password, the modern approach to applications meant that there was no direct access at all to the hardware or to other parts of the system. Every application ran in its own “container” (sometimes called “sandbox”), essentially a walled garden. Most requests to the operating system went through an intermediate “broker” to ascertain whether such an operation would be permitted. For example, in Win32 an application wishing to use the camera would simply connect to the camera as a device and start using it. In modern applications, the request depended on whether the user had given permission to use the camera and, importantly, whether the developer built into the app the proper requests of the user for permission. This was a dramatic change for Windows and was only just starting to make its way to phones, whereas today we are all too familiar with the endless prompts for permission to use devices. These constraints placed on applications created a new level of trust and security for Windows, though at the expense of the kind of flexibility and “anything is possible” that Windows developers had come to expect.
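
    To make the broker model concrete, here is a minimal sketch in C# of what this looked like from the app side, assuming the app’s manifest declares the webcam capability; the class and method names are mine, purely for illustration, not taken from any official sample.

```csharp
// Sketch only: brokered device access in a Windows 8 (WinRT) app.
// The app must declare the webcam capability in its manifest; at first use,
// Windows shows a consent prompt on the app's behalf.
using System;
using Windows.Media.Capture;
using Windows.UI.Popups;

public sealed class CameraAccess
{
    public async void StartAsync()
    {
        var capture = new MediaCapture();
        try
        {
            // The broker checks the declared capability and the user's consent
            // before any camera hardware is exposed to the app container.
            await capture.InitializeAsync();
        }
        catch (UnauthorizedAccessException)
        {
            // Consent was denied (or the capability was missing); the app never
            // touches the device and can only explain itself to the user.
            await new MessageDialog("This app was not granted access to the camera.").ShowAsync();
        }
    }
}
```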

    Applications also behaved differently relative to how they could consume battery power by constantly running. In Win32, every application ran all the time. Whether or not the user expected an application to be doing something while they could not see it, running constantly was simply how applications behaved. Every little icon on the system tray, every application that created a “background” process, and every window on the screen could be running on the CPU while consuming power. A modern application that was not visible to the user entered a new state called “suspended” and consumed no power at all. Only when the app was visible did it resume operating as normal. In the desktop computing model, every window must be constantly updated whether the user is looking at the contents or not, with each update draining battery power.
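
    As a rough illustration rather than anything from our documentation, this is the general shape of the suspend handler in a Windows 8 C# project of the era; the saved setting is an invented example.

```csharp
// Sketch only: handling suspension in a Windows 8 (WinRT) app.
using Windows.ApplicationModel;
using Windows.Storage;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        this.InitializeComponent();
        this.Suspending += OnSuspending;
    }

    private void OnSuspending(object sender, SuspendingEventArgs e)
    {
        // Take a deferral so the quick save below can finish before suspension.
        var deferral = e.SuspendingOperation.GetDeferral();

        // Persist whatever is needed to resume seamlessly later (example value).
        ApplicationData.Current.LocalSettings.Values["lastVisited"] = "home";

        deferral.Complete();
        // After this, the process is frozen: no timers, no background CPU, no battery drain.
    }
}
```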

    This architecture was essential to reduce the power consumption of a modern PC. It was, somewhat surprisingly, controversial. Developers with their large screens envisioned a world with many tasks happening in many windows and thought of work as keeping an eye on all the different windows. The billions on the verge of using phones and tablets, however, were using one app at a time and the screen filled with that app. They were immersed in the app. Switching between apps was instant and tasks resumed instantly when needed. By and large, this style of usage so typical on phones mapped to how real-world customers used even current Windows laptops. Most all Windows users had for the longest time run applications full screen and worked on one app at a time, and still did. Even switching between apps followed the way apps on phones worked. We built support for snapping apps side by side to provide two apps at once.

    Still, creating only full screen apps with the new platform would be an ongoing debate or discussion with developers. It was the kind of change that was perfectly fine for the broad user base and certainly for future new users, but not what developers themselves wanted. For developers of course the desktop was still there where all their tools ran anyway. We expected developers to focus on the desktop and they did.

    This was one of a number of issues where the initial audience seeing Windows 8 did not see the world the way we saw the next billion customers heading. They had their jobs to do. It reminded me of the early days of Word when the project was not going well and a Word developer asserted that they aimed “to please an audience of one. I write it for me. If I'm happy I know some cool people will like it.” which was a rather limiting way to think of word processors in hindsight.

    A distinguishing characteristic of modern applications was how they connected to each other and to Windows. In Win32, apps had a limited way of sharing information with any other app and that was through the clipboard via copy and paste. To do more usually required applications to open up files from one app inside another. This led to all sorts of potential risks, unbeknownst to users, as we learned with Office. Apps could insert malicious data into other apps or apps could crash opening data files created with another app simply because they did not fully understand the other app’s files. Worse, apps could simply navigate the hard drive and gobble up all of a user’s files (in the background!) without their knowledge.

    Modern apps were therefore isolated from each other. This would be quite limiting if not for the invention of “contracts” between apps, and between the operating system and app. With contracts apps could easily tell Windows 8 they would accept or provide information to other apps. Windows 8 created super easy mechanisms for apps to share their information with other apps while at the same time accepting information from other apps. This all took place only when a user initiated an action. Across the system, inserting data such as photos, searching across apps, opening files, and sharing information within a file were all just a tap or click away. A key aspect of the design of the platform was to make it as trivial as possible for an app to add these features and access them via the charms.
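
    A minimal sketch of what a contract looked like to a developer, in C#: the sharing-source side of the Share contract is a single event handler, with Windows brokering everything between the apps. The page name and the shared text here are invented for illustration.

```csharp
// Sketch only: acting as a Share source in a Windows 8 (WinRT) app.
using Windows.ApplicationModel.DataTransfer;
using Windows.UI.Xaml.Controls;

public sealed partial class RecipePage : Page
{
    protected override void OnNavigatedTo(Windows.UI.Xaml.Navigation.NavigationEventArgs e)
    {
        // Opt into the Share contract for this view.
        DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
    }

    private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
    {
        // Windows invokes this only when the user taps the Share charm.
        args.Request.Data.Properties.Title = "Tonight's recipe";
        args.Request.Data.SetText("Tomato soup with basil and a little too much butter.");
    }
}
```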

    There were other connectivity features for modern apps as well, all of which were implemented with security and privacy in mind while requiring a minimal amount of code. Connecting to a printer, a speaker, or a camera could be done trivially with only a few lines of code. One of our most common demonstrations showed how easy it was in the platform to add a live video feed from a camera with only a couple of lines of code.
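
    The camera demo really was about that small. A hedged sketch of it in C#, assuming the page’s XAML declares a CaptureElement named Preview and the webcam capability is in the manifest, as in the earlier sketch:

```csharp
// Sketch only: the "live video feed in a couple of lines" demo.
using Windows.Media.Capture;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class CameraPage : Page
{
    private readonly MediaCapture capture = new MediaCapture();

    private async void OnStartPreview(object sender, RoutedEventArgs e)
    {
        await capture.InitializeAsync();    // brokered: manifest capability plus user consent
        Preview.Source = capture;           // Preview is the CaptureElement declared in XAML
        await capture.StartPreviewAsync();  // live video starts rendering on the page
    }
}
```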

    There weren’t just new-fangled architectural changes in Windows 8. Even the mundane, such as how displays worked, underwent a dramatic rethinking. At one of the early meetings on Windows 7 tablets when we first showed touch to the PC team at Intel, their ever-enthusiastic platform leader, Mooly Eden, had a question for Windows. He couldn’t just ask; he had to show us. Mooly always had to put on a show. He stood up and started flipping a new Windows 7 slate in the air, spinning it. He wanted to know “when will the screen image also rotate?” He was right. Windows had no ability to do what the iPhone and iPad did so well, simply handle the rotation of the screen, essential for a handheld device. Supporting this first required integrating a sensor to detect the rotation, something no PCs had, though Windows 7 provided some early support. Second, the whole video subsystem needed to be reworked. While it was possible to switch the orientation of a display in Windows 7, doing so was cumbersome and involved a great deal of flickering and prayers to the Windows control panel. With little fanfare, Windows 8 added this along with support in the modern platform for apps to easily manage the transition between orientations.
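
    For apps that wanted to react to rotation themselves, the platform surfaced it as a simple event. A sketch using the Windows 8-era DisplayProperties API (later superseded by DisplayInformation); the layout helper here is a stand-in for whatever an app would actually do.

```csharp
// Sketch only: reacting to device rotation in a Windows 8 (WinRT) app.
using Windows.Graphics.Display;

public sealed class RotationAwareView
{
    public void Attach()
    {
        // Raised whenever the device is physically rotated and the display
        // subsystem re-orients the screen.
        DisplayProperties.OrientationChanged += OnOrientationChanged;
    }

    private void OnOrientationChanged(object sender)
    {
        bool portrait =
            DisplayProperties.CurrentOrientation == DisplayOrientations.Portrait ||
            DisplayProperties.CurrentOrientation == DisplayOrientations.PortraitFlipped;

        RelayoutFor(portrait); // hypothetical helper that rearranges the page content
    }

    private void RelayoutFor(bool portrait)
    {
        // App-specific layout changes go here.
    }
}
```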

    At the other end of the spectrum of screen sizes, Windows 8 built the correct architecture to handle the new high-resolution screens coming on the market. These screens would come to characterize the iPhone and iPad as Retina displays. While they were available as external displays for Windows, they were always buggy and flakey. Buttons or text would show up super tiny, or the spot where an app registered a click would not match where on the screen the click actually took place. This failure of Win32 was rooted in the inability to uplevel the whole installed base of hardware and software as the hardware advanced, leaving the old software in the dust. Supporting these new screens was one of over 100 features in Windows that improved the desktop and one of many features that were newly native in the modern Windows 8 platform.

    Most would judge the completeness of a new platform by how it enabled them to build the main features of their application. In many ways this was the equivalent of Petzold and included the platform for creating the interaction between a developer’s app and their customers. Relative to Win32 and to the chaos of the .NET world, Windows 8 arrived with an incredibly strong story. Much of this came about because of our own efforts building applications for Windows 8 and the scenario planning for the product. Windows 8 provided a palette of over two dozen “controls,” or user interface elements typically used in apps. While one could find these and more via third parties, or by counting everything offered by all of Microsoft’s existing platforms, never had the Windows platform delivered the full set of required controls all at once. This resonated with me personally because of my own history. Back in the early 1990s when toolbars were invented and developers wanted to add them, Windows provided no support even though Windows itself had toolbars. For Visual C++ I worked with a developer on the Windows 95 team who wrote those controls and added them to MFC, using the Windows code. That developer was also part of the Windows 8 team, so kind of full circle.

    A key part of modern apps was how they fit in with the design language for Windows 8 to provide a beautiful, aligned, and consistent experience. Such polish and attention to detail was not something Windows had historically delivered. By and large, Windows deferred to marquee apps like Word and Excel to define the proper look of Windows apps, though it did not support that look with code or APIs. Windows 8 provided developers with a broad set of tools for typography, animations, color selection, user interface grids to align applications, and more.

    Fitting into Windows also meant providing the support for system-wide features such as Live tiles, notifications and alerts, as well as all the necessary infrastructure to easily install and deploy apps. These were also features that Windows provided piecemeal, if at all, in the past.
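
    As one small example of how little app-side code these system features required, here is a hedged C# sketch that updates an app’s live tile using one of the platform’s stock templates; the headline text is invented.

```csharp
// Sketch only: pushing an update to an app's live tile on the Start screen.
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class TileUpdates
{
    public static void ShowLatestHeadline(string headline)
    {
        // Ask the platform for one of its stock tile layouts and fill in the text.
        XmlDocument xml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquareText04);
        xml.GetElementsByTagName("text")[0].InnerText = headline;

        // Hand it to the tile updater; Windows renders and animates it on Start.
        TileUpdateManager.CreateTileUpdaterForApplication().Update(new TileNotification(xml));
    }
}
```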

    Finally, everything discussed above was designed from the start to be programmed from all of the programming languages and tools offered by Microsoft. Developers could build modern apps using web tools of HTML and JavaScript, C#, XAML, Silverlight, and even C++ or Visual Basic. Every demo we planned on showing at the upcoming conference included use of all of these languages. We were ready to atone for the lack of communication over the summer.

    The platform as delivered to attendees at the upcoming conference was as complete as anything ever delivered by Microsoft all at once, and in the first version. We knew we had a good deal to learn and a list a mile long of what we could do in the future. That was a given. We were so excited though to deliver the future of the Windows platform.

    We arrived in Los Angeles on September 12, 2011, for our Professional Developers Conference, renamed //build, to be attended by 5,000 people and what seemed like the entire technical press community. Marketing for the new Windows 8 Start screen and the new //build logo was everywhere—those live tiles, colorful rectangles, and animation. It was a breathtaking sight. And we were excited to be there. Even the Denny’s across the street welcomed Microsoft for breakfast, with a sign. A pre-show tradition for me was breakfast at Denny’s.

    Unfortunately, there was also something of a sense of dread about the languages and tools for creating new modern apps—our self-inflicted crisis created at the unveiling of the user experience and the omission of .NET before the summer. There was so much cynicism in how the various existing platforms had been talked about—always implying one worked well with the other when in fact they were all separate, all had limitations that went unsaid, and all had major issues. As a result, however, we created the opportunity for a set of people to spend the summer alternating between conspiracy theories and anger over the developer platform.

    The effort to make every single Microsoft language a first-class tool for the modern Windows platform was an immense cross-division project. To say it was difficult and controversial would be a huge understatement. It was extraordinarily difficult because many inside Microsoft also either believed or even wanted to believe the conspiracy theories surrounding the deemphasis of .NET by Windows. In some sense they had a right to, as they believed in their own work and lived through the alienation and schism that Vista created between .NET and Windows. Windows 7 did not offer any relief, though it also did not do anything to make it worse. In that sense it was hardly a conspiracy theory, but a fact of life in the .NET world.

    Windows 8, however, was when the strategic issues of the past required resolution. There was no simple answer. Realistically, the answer would require a degree of subtlety that tested those on every side of the mess, myself included. We absolutely intended to move forward with the languages of the .NET world (C#, XAML, Silverlight, VB, C++) while at the same time there was no plan to simply move all the code created that used the libraries and frameworks of the .NET era such as WinForms, WPF, VB forms, or Forms³ or third parties. This “technicality” would prove to be an enormous barrier to not only acceptance but even acknowledging the strategy we were embarking upon. To many, not being able to simply port their code to new Windows 8 apps was a non-starter. We had accounted for this by describing the difference between domain-specific code (for example all the code that connects to a remote database or cloud service) and the user experience code that would need to be reimplemented. Most all apps written in this world were divided cleanly in this respect. Not only that, this transition was precisely the one most had gone through to provide web access to the databases and cloud services they built.

    We had two challenges. First, within the Microsoft bubble it was clear “everyone” used these technologies. The Developer division had created a bubble of the .NET developer world insulated, as previously described, from the world of HTML and the web browser. Within that world it was of course true that everyone used .NET. Anything we proposed that was not about leveraging that investment was by definition failing to leverage something everyone used. Arguing over how much .NET was used as a portion of total development or skilled developers was futile, especially because even defining terms was impossible as we’d learned for several years of Mid-Year Reviews and Developer Usage Surveys. Even discussing the lack of commercial software would only serve to exacerbate the challenges.

    Second, the existing .NET world was the one that failed to account for synergy with Windows or address the problems we set out to resolve with a new platform. When it came to consistency with Windows, safety, security, reliability, battery life, or even basic attributes such as performance or accessibility, the plethora of .NET tools failed to achieve a level of performance or quality consistent with our main competitor, Apple.

    These two challenges were not just external to Microsoft. The .NET bubble extended to the 50,000-person Microsoft enterprise organization that was the tip of the .NET spear and to the massive Developer division itself. Many people had spent a decade building out these capabilities and all was going well until, well, I came along.

    It was no surprise that the whole time Windows tried to create the connections for the .NET languages to use the new Windows 8 platform, we ran into discussions about interoperability or conflicts with the existing .NET platform and APIs. Each new advance in Windows 8 was met with an opportunity to debate reusing a .NET mechanism to accomplish the same thing—there was a near-constant effort to insert the old .NET into the new platform. Windows 8 had a strong point of view about a platform for the future that embraced a new set of qualities entirely absent from the existing .NET efforts.

    We intended to deliver on that point of view. The good news, actually great news, was that the vast majority of efforts between the teams supported this mission. That didn’t preclude grumbling along the way. There were some outright stressful times. The release of code would surface many of those, either internally or, unfortunately, externally via the press.

    Since we know now that Windows 8 did not achieve the success we hoped, this is a reasonable time to point out that it is precisely at this moment that the efforts between teams created the “told you so” people. In Microsoft parlance this is done by opening up Outlook and searching through Sent Items and reliving these debates to prove that no one listened, or someone knew all along. Success has many parents and failure is an orphan. No matter what happens someone always said it would. This isn’t about bitterness on any side, but I can promise many more people contributed positively at the time than seemed to contribute in hindsight. We’ll return to this reality.

    We had an incredible keynote planned for the first morning. It would feature a demo of the Windows 8 experience from Julie Larson-Green, an incredible overview of potential Windows 8 devices from Mike Angiulo, Chris Jones detailing for the first time the many cloud-based services integral to the Windows 8 experience, and Antoine Leblond building a modern app on stage using HTML and Silverlight. Even I did a demo, showing only the improvements to the desktop. It was a two-hour-and-fifteen-minute tour de force by the full Windows 8 product executive team.

    I had been to a lot (a lot a lot) of Microsoft keynotes and conferences and launch events over the years. I participated in many as well. With as much sincerity as I can muster, the keynote for the 2011 //build conference was absolutely Microsoft at its best. The whole of Microsoft had never come together to deliver such a coherent and compelling message. It was all done live on real computers running real software. Nothing like it has happened since, and it is clear we’re in a new era in which that 2011 presentation might have marked the end of the PC-era live event. Even Apple would cease to take on the kind of risks we took on during that demo when it came to real products, demonstrated by real executives, in front of a live crowd while streamed around the world. It scares me now to think of just how crazy we were. It was by far one of my favorite days at Microsoft.

    Within that favorite day were two favorite moments. Technically, three if you count the walk-on music at the very top of the keynote that I chose. I walked on to Renegade by Jay Z (with Eminem)—“No lie, just know I chose my own fate./I drove by the fork in the road and went straight.” Certainly, over the top. But I felt we could have taken an easy path and just built yet another version of Windows doing more of the same—there were many options for how to do that, the equivalent of going left or right. Instead, we created our own path and went straight. The short sound clip is not in the official video because of copyright.

    Over the summer we had become increasingly concerned that developers would have no way to experience Windows 8 on the type of hardware we expected to exist in the future. We were working with OEMs closely to create PCs with touch panels, ink pens, and in a variety of form factors from pure slates to convertibles to laptops to all-in-ones. While we tried to have some of those machines for Windows 7, they were few and not commonly owned because Windows 7 lacked the value proposition to drive sales.

    The Ecosystem team had been working closely with Samsung on an Intel-based PC that was a pure tablet form factor with an available docking station which could be used to connect a mouse, keyboard, external display, and wired network. The PC was powered by the latest low voltage Intel Core processor. Many of our demos and most all of the demo sessions (over 100!) would use this PC at the show.

    At the 2009 PDC we were excited enough by the prospects of touch computing that we provided attendees a free touch laptop. It was a new “thin and light” form factor which was just starting to come to market to compete with Apple. This announcement stunned the audience. Therefore, by the 2011 //build, attendees were certain there would be another free laptop. As a point of information, that stunt in 2009 cost millions of dollars more than even the revenue from the paying conference attendees.

    In the keynote, MikeAng, ever the showman, walked through all of the platform capabilities that the ecosystem and developers could bring to life with Windows 8, showing everything from ARM-based devices to liquid-cooled gamer rigs the size of dorm refrigerators. Then he began to describe a new Samsung device. He said, slyly, the device was being produced in small numbers right now as a prototype. I looked at him and asked “so how many will they make” to which he replied “well, I’ve been told not to bring a cool PC unless we brought enough for everyone so there are 5,000 out on the loading dock right now.”

    And with that we announced the “Samsung Developer Prototype PC” to go with the “Microsoft Windows 8 Preview Build.” And everyone went nuts. The key value of this PC was precisely to demonstrate the value of Windows 8. With one Intel-based PC, developers could run and develop Windows 8 applications for both Intel and ARM devices. The Samsung came loaded with Windows 8 software, sample apps, a pre-release of Visual Studio, and more.

    Little did the 5,000 attendees know that over the previous week a team of about 40 Microsoft engineers had been unloading the Samsung PCs from crates air-shipped from Korea to install all this new software and configure them for distribution. In order to get them to Los Angeles in time, Samsung had to manufacture them before the software was complete. It was a heroic effort and made it all the more rewarding for sure.

    If you’re interested in more about this, Raymond Chen (RaymondC), a member of the Windows team (UEX) and author of “The Old New Thing,” wrote a post with some insider trivia, Some trivia about the //build/ 2011 conference.

    Loaded on the Samsung were 17 new Windows 8 applications, the source of my second favorite moment of the day. In addition to the apps from Microsoft such as Mail, Calendar, Bing, etc. each visible on the Start screens shown throughout the week, these 17 apps represented a chance to have fun with the pre-release. The amazing feature of these apps was that they were built entirely by the interns over their 10-week summer jobs. We asked 17 groups of 2-3 interns to work together to build these fun apps from scratch, with zero baseline understanding of Windows 8. Not only were they able to create these apps on their own, but they did so when neither the platform nor the tools were near ready and were constantly shifting. I was so proud to both announce and share their summer accomplishments. While for most school had already started, we were able to find about 20 of them who were able to sneak away from school to attend the conference and even present a scheduled session detailing their experience. It was as amazing for them as it was for me—a career highlight. They even made a video of their summer experience which we showed to all the show attendees in the keynote.

    That this group took their ideas with an unfinished platform and tools to create new apps was a huge source of validation for the “Petzold” test we hoped Windows 8 would pass.

    After providing attendees with a candid roadmap from this moment to final release, there was a short break. We were not done with day one content, though. Two absolutely amazing presentations followed, and we overwhelmed attendees with their depth and quality.

    First, JensenH provided an overview, from the experience perspective, of what makes for a great Windows 8 app. This was the touchstone presentation that brought together all we had learned to date, including from the unveiling and the subsequent feedback.

    Following this, AlesH brought things down to the metal and showed the developers in attendance the ins and outs of the Windows 8 platform. He wrote a ton of code and built live apps on stage ably assisted by John Sheehan (JSheehan) who was a senior software engineer on the platform. Importantly, they showed off building apps from scratch using Visual Studio, connecting to Windows 8 to share information and use devices, and also detailed the architecture that provided apps with a sandbox to run in for safety, security, and predictability. They even showed off how apps don’t consume resources when they are not visible on the screen. Of course they used all the .NET languages and showed the broad support for bringing forward that knowledge.

    Even with all that incredible content, there was one slide we worked the most on and that was a boxes and b******t, or politely boxology, slide showing the new Windows 8 platform and its relation to all of the programming languages and APIs Microsoft had to offer.

    This was not a slide I dreamed up in my office, not even close. Aleš, in his role as the distinguished engineer leading DEVX and the Windows 8 platform, worked for weeks to create this diagram, collaborating with every part of the whole project across Windows and the Developer division. I thought it was a work of art.

    Aleš also made the slide a key part of his presentation. By the time he spoke the proverbial s**t had hit the fan and he knew he had to somewhat rescue his slide. People had already written instant blog posts and the live blogs picked up on the diagram as well.

    So what happened?

    Presented in all the glory of the Metro design style, huge and on stage, was as clear a diagram as we could have created describing the Windows 7 world being brought forward for desktop applications, including support for Internet Explorer, Win32, .NET, Silverlight, HTML, C++, C#, and more. Everything had a box, resting on top of the Windows Kernel (a.k.a. MinWin!). New modern apps were shown in boxes with all the tools and languages available to build “on top of” WinRT, including that same list (XAML, HTML, C#, C++, VB, and JavaScript).

    The diagram, created by dozens of people from nearly every team in the company, proved incredibly popular, but not in a good way.

    To support the diagram, Antoine’s full demonstration of building Metro apps went through all the languages and tools, in real time on stage. He built several apps, including one in HTML. Antoine took sample code from Developer division vice president (and previously manager of Silverlight) Scott Guthrie’s (ScottGu) famous (amongst .NET fans) XAML sample, and with no changes ran it as both a desktop app and a Metro app on WinRT. The new Metro apps were touch-enabled, integrated with search and sharing, and even showed off the beautiful graphical qualities of the platform by displaying nicely alongside the rest of Windows 8.

    The problem?

    Before I even got off stage, people were taking the image and redrawing their view of the architecture. Some were critical of precisely how their favorite tools were rendered. Others challenged the technical integrity of the diagram. Within the developer press there were even news stories and quite a few blog posts dissecting the travesty that was this image. It all happened in real time before we had even finished.

    There were blog posts that dissected the diagram and rewrote it. It was the technical buzzsaw applied to a conceptual diagram. The weirdest part for me was that this was a diagram we had created about our own technologies, but the .NET community thought of them as their technologies. In reality, they wanted Windows 8 to work like Windows 7 while further adopting their .NET technologies. They wanted Windows 8 to fulfill the .NET promises they believed were made, and even more so now that we were indeed delivering a new platform. There was little accounting for what we believed we needed to accomplish in righting the Windows platform or the steps necessary to modernize it.

    Some of the blogs centered on how much of the code was actually repurposed Win32—not because that would be bad, but because of a theory that it would undermine our argument about the modern platform being native or completely new. Others took aim at the use of the .NET languages versus the .NET runtimes or libraries. This issue was the whole point, unfortunately. Ironically for me this point also was one I made to BillG for years when it came to the value of owning a proprietary language versus owning the API. Now all those championing a language were learning the value was really in the runtime. Yes, that was always my point.

    Emotions were running high. There were deep concerns about investments in existing code and porting it to modern apps. We didn’t make this promise to the degree developers wanted and, as a result, there was a wave of feedback over Microsoft abandoning them and costing them “millions” of dollars in lost revenue. That’s what people said. To me, this had shades of past transitions, specifically the major mistake Microsoft Word made in trying to reuse the code for character-mode Word to build Windows Word. It did not work and cost the team years. It also felt like the carnage caused when much-loved Visual Basic switched to .NET.

    The biggest mistake in platform transitions is attempting to deny that a transition is taking place or to believe a dramatic platform shift could be managed by incremental changes.

    Not everyone writing about this diagram took such a negative approach. Several provided thoughtful views pointing out that there was a good deal to be excited about. In an in-depth view, Paul Thurrott wrote on his SuperSite for Windows blog:

    WinRT solves many of the problems of Win32, from an apps perspective, though it can't be used for certain things of course (writing NT services, drivers, and the like). Apps created for WinRT are safe, secure, and sandboxed, can't wreck other apps, can't cause "Windows rot," and all install in just 2-3 seconds. They feature isolated storage, single folder installs (as per the Mac), and require user consent to access the general file system. When Microsoft says it "reimagined" every aspect of Windows, this new runtime, or application model, must be included as well. This is part of it, a modern, truly different app platform that is far more different, and far more capable, than many people understand right now.

    And in the same vein of blowing past peoples' expectations, virtually no app could not be written as a WinRT app. Many are imagining very simple, HTML-like apps, and while I'm sure there will be plenty of those, you need to reset your expectations up. WinRT is amazingly full-featured and not constrained to goofy utilities and simple games. The next "Call of Duty" could be a WinRT app, complete with support for Edge UIs and Charms.

    And here's something interesting: Virtually all of the Microsoft WinRT apps--Mail, Calendar, People, Chat, and so on--are all written in HTML and JavaScript, not C# or another supposedly superior language.

    And you laughed when they repeated "Windows reimagined" over and over again in Tuesday's keynote. I'm starting to think they didn't push this point enough.

    Our main aim for //build was to put forth the message of a bold reimagination of Windows from the chipset to the experience. That was what we wrote in the press release.

    So many of the reviews of the pre-release product articulated the bold design, the attractiveness, the functionality across different types of PCs, and the compatibility with everything in Windows 7. The note of caution was always about the amount of change and how some users might be skeptical. Overall, the press and bloggers were excited and impressed with the leap we had taken, excepting those still debating the diagram.

    After a month with the pre-release code, ArsTechnica in its detailed review had the following to say in “Turning to the past to power Windows’ future: An in-depth look at WinRT”:

    Microsoft is taking a risk with WinRT, as developers may not follow it, but it's a calculated risk, and it's a low risk. The re-use and modernization of existing technology gives WinRT instant familiarity for developers, but also an easy path for Microsoft to extend the reach and capabilities (and ease the restrictions) should it need to. It's at once the embodiment of Windows' future and the embracing of Windows' past.

    I shared how I thought this event was Microsoft at its best. The press coverage following the event generally agreed. I recognize in hindsight this might seem to be a combination of delusional and revisionist. It was neither. There were over 1,000 original stories of which nearly 700 were what we called PRIME, an acronym defined by the PR agency for stories appearing by top writers in top publications. On average the stories were extremely positive. In a system the agency used to score the tone and message pickup of stories, where 100 is neutral, we scored 124 in 386 news stories, 118 in 293 blogs, and 117 in 11 product reviews done in real time. Microsoft events routinely scored a negative sentiment during this era, meaning less than 100.

    The agency put together an End of Event recap that ran for over 120 pages. These were tremendous results, not solely because of the content by any stretch. Achieving these results required an amazing marketing and communications effort around the world.

    With no intention of shaming any writers who subsequently revised their views or chose to come to different conclusions later, some of the quotes that came along with the availability of code and the Samsung PC press loaners were positively effusive:

    “I've never witnessed this much excitement in a Microsoft audience at a keynote before. It's electric.” –Tom Warren, WinRumors

    “I might just become a developer considering how easy #Microsoft has made it to write things for Windows 8.” –@reese305

    “my iPad suddenly became old fashioned #win8 #bldwin” –@martinsih

    “Hello, Windows 8? This is iPad. You win.” –@thurrott

    “the Metro UI stuff is the best thing Microsoft has come up with in years.” –@thurrott

    “Here’s the real feather in Windows 8’s cap: Write once, run (mostly) everywhere software.” –Nate Ralph, PC World

    “the experience still felt magic, and that’s what really counts. Well done Microsoft.” –James Governor, RedMonk

    “Apple bloggers were apparently so flustered by the platform that they resorted to bombarding Twitter with jokes about cooling fans and Silverlight instead of stopping for a moment to realize that Microsoft is showing us the future of computing.” –Zach Epstein, BGR

    Obviously, it wasn’t all positive. Surprisingly, some still questioned whether we would release “on time,” though we had not provided a date; we knew we would hit the fall. The real lingering question, and one that did not come up all that much at the event, was the “two modes” of Windows. Speaking for “users,” many in the press would rhetorically ask whether users would be able to understand the two modes of using Windows, the desktop and the new Metro experience. Owing to the developer audience, they were not yet concerned about ARM and simply saw it as an opportunity. They would be running Intel because that is where development tools existed.

    The issue of “two modes” was the one we needed to tackle. I wasn’t sure what else to do. Given the history of Windows “modes” and what we designed for, the complaint and answer were not easily reconciled. Would this become the focus of the release?

    With all that, including the protestations around our box diagram, I had wrapped up one of my happiest weeks at Microsoft. It had been a long time since the whole of Microsoft came together as a team and delivered a strategy and produced an event with such clarity and developer excitement.

    In under 24 hours we had 500,000 downloads of the Windows 8 Developer Preview Release. At the time there were perhaps three million professional software developers worldwide, total.

    It was Microsoft at its best, but the reactions turned out to be relatively fleeting. The developer community was but one audience. For all the dustup over .NET they were the home crowd at //build.

    We had a nagging issue of competing with Apple not just on iPads but on laptops. Could we work with the ecosystem to muster something to compete with Apple’s MacBook Air, finally after years of failing to even try?

    What would mainstream consumers say with a much more polished release? What would developers say about the new Windows Store which we had yet to show them? What would we tell developers waiting to see how Microsoft Office would build for Windows 8?

    On to 105. New Ultrabooks, Old Office, and the Big Consumer Preview



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hardcoresoftware.learningbyshipping.com
  • A reasonable question to ask is “Why did Windows 8 need to create a new platform?” Not only did Microsoft have Win32, the tried-and-true real and compatible Windows platform, but the company had pioneered the .NET platform and with Windows Phone 7 extended that platform to phones with Silverlight. This post is my take on the history and how we ended up at this point. It takes us way back and shows how sometimes what emerges as a major strategic problem can trace its origins back much earlier than one might think. In the next section we will unveil the platform to developers at the //build conference.

    Back to 102. The Experience

    The Windows platform and associated app ecosystem were sick. Across product executives we had a very tough time coming to grips with the abysmal situation. We definitely could not agree on when the situation turned dire or why or if it had. That meant doing something about it was going to be challenging.

    Some were so desperate for good news that they grabbed on to any shred of optimism. At one of the infamous Mid-Year Review (MYR) meetings during the development of Windows 7, a country general manager proudly presented their results of the annual developer survey designed to show what platforms developers were coding for and the tools they used. The head of the developer segment for India said they were seeing Windows rise to the top of the chart for the most interesting and targeted platform. Windows! Immediately the room woke up from its MYR-induced stupor. Questions and comments were flying, “What outreach did you do?” “Did you start in University comp sci programs?” “How did you use financial incentives?”

    The optimism was misplaced. The reality was even more bleak than a benign survey outlier, a common occurrence when compensation and corporate metrics were attached to surveys. There was no surge in Windows development. Nope, it was the opposite. India had become a favorite location for companies to outsource their legacy Windows software development. We weren’t measuring an uptick. We were literally measuring the final blow to the Windows development ecosystem as companies everywhere looked to place development out of sight and out of mind. I hate to say so, but it was obvious that’s what the data showed. Microsoft itself had incented teams to transition projects to India, and not often the most strategic ones, as I had learned when we had to reconstitute the Windows sustaining engineering team.

    Through the 1990s and the rise of Windows, BillG hosted an annual dinner for the largest and most important Windows ISVs, the CEOs and founders of leading tech companies of the era. The dinner was always a star-studded affair featuring the legends of the industry including Philippe Kahn, Jim Manzi, Ray Ozzie, Paul and George Grayson, David Fulton, Fred Gibbons, Scott Cook and perhaps 50 more. These were the leaders of the new industry each presiding over companies with hundreds of millions or billions of dollars of revenue. The companies built the tools from my earliest PC days: Borland Turbo Pascal, Lotus 1-2-3, Lotus Notes, Micrografx, FoxPro, Harvard Graphics, Quicken, and more.

    As Windows won, ironically the health of these companies declined. There was a natural consolidation. Many were acquired, and their products slowly faded. Microsoft had its competitive products and the rise of Office, Visual C++, Outlook, and others certainly contributed. Microsoft’s singular bet on Windows and early success on Macintosh were factors as well.

    It was, however, the internet and the web browser that really changed everything. The above ISVs started their companies in the 1980s on MS-DOS or in the early 1990s on Windows. Anyone starting a company, particularly in Silicon Valley, in the late 1990s started as a web company. Many startups created enterprise software, though we tend to remember the rise of Yahoo, Google, and later Facebook.

    A look at the top software companies in 2010 read like a list of industrial giants more than pure-play software companies. PwC published a list, Global 100 Software Leaders, that illustrated how the industry had changed. Among the top 100, there were only three that made tools or productivity software primarily aimed at Windows or Mac: Adobe, Autodesk, and Intuit. There were even more companies that built safety, security, or management tools addressing shortcomings of the PC: McAfee, Symantec, TrendMicro, Citrix, etc. Most of the companies were either transitioning or had transitioned completely to web-based interfaces running against datacenter software.

    The real end of the ISV dinner happened for me in 1999. The Microsoft Business Productivity Group (BPG) led by Microsoft senior vice president Bob Muglia (BobMu) announced a deal to acquire Visio Corporation for approximately $1.5 billion, Microsoft’s largest acquisition at the time. Visio was a profitable company approaching $200 million per year in revenue and nearly $30 million in net income. Bob was my manager though I didn’t know anything about the deal.

    The Visio Corp. founders were the fantastic team of Jeremy Jaech, Ted Johnson, and Dave Walter. Jeremy and Dave had previously co-founded Seattle-based Aldus Corporation where Ted later led the engineering for Windows PageMaker. Aldus was acquired by Adobe in 1994.

    Visio was a wonderful product and from the very start engineered a strong affinity to Microsoft Office, often working jointly with us on marketing, sales, and even consistency of product design. The product, an easy-to-use diagramming application, pioneered many structured drawing tools we take for granted today such as stencils of shapes, the ability to modify shape geometry, and magnetic connectors between shapes. The company was headquartered in downtown Seattle when Microsoft didn’t even have offices across the bridge.

    Visio was the last poster child of the old ISV world still standing to have started out as a Windows software company. It was one of the first applications developed for Windows 95, a fact that was heavily promoted. Strategically it used everything Microsoft could throw at it, from Visual Basic automation to data access APIs to OLE. Ted once joked with me that they would have used Clippy if we gave them the chance.

    Ted, who stayed on for a bit with Microsoft and then later returned to work on Internet Explorer (TedJ), was always candid about their journey. He was wonderful to talk with and had the experiences of a founder, which was something I sought out. His view was clear that the days of being a breakthrough independent developer focused exclusively on Windows were over. It wasn’t just the browser, but also the demands of building out an enterprise sales force and having a full product line exclusively devoted to being a Windows ISV. This was even before the rise of mobile.

    I had a few conversations with BobMu and SteveB over whether it was a good idea for the health of the ecosystem that we participate in the ongoing consolidation. I think they heard what I was saying as a resistance to the deal because I’d end up managing the team—that wasn’t it at all and I loved the team and product. Visio was a great addition to the Office family and brought a significant amount of expertise with a great team. For me, it meant the only remaining independent and relatively horizontal Windows ISV was Adobe. Should we buy them too? Then we’d be our own ISV ecosystem. It just didn’t seem healthy.

    On the other hand, it was inevitable. The Windows ISV world wasn’t what it was, but why?

    Microsoft wasn’t standing still while this decline took place. We struggled both to build a coherent strategy and to execute effectively. It wasn’t a shortage of strategy, rather it was a combination of several strategies that ended up failing to reinforce each other and ultimately weakened the overall company strategy.

    At the core was Microsoft’s collective response to the internet platform—the browser and the server programming model. The Windows team fully embraced the browser as the future platform, pioneering and advancing HTML along the way. In the late 1990s, Windows even redesigned the Windows desktop to integrate HTML and browser technologies onto the surface of the desktop. Still there was no bridge between Windows and the web platform. Windows simply became a place to run a browser. In Windows 7, we finally took a step of relying on Windows itself for browser features when we integrated DirectX graphics that enhanced animation, video, and overall browsing performance. That came after Microsoft ceded the dominance it earned in an earlier era, unfortunately.

    This entire time, the Windows desktop API—Win32—went under-nourished, so to speak. It did not really matter because anything done in a new release of Windows would be ignored by ISVs unless it also worked on older versions of Windows. Enterprise PCs continued to run older versions of Windows longer than even the 3-to-5-year upgrade cycle of Windows and PCs were lasting longer as well. No ISV was willing to invest in writing code specific to a new version of Windows for what amounted to tiny marginal gains in functionality or features. Some teams released new features that ran on the installed base of older Windows, further reducing the perceived value of a Windows upgrade. This cycle of distributing new features for the old Windows installed base increased the complexity and fragility of existing PCs by designing implementations of system-level features to work in the context of multiple versions of Windows. It was a mess. Windows had turned into a distribution vehicle for features without a coherent platform strategy.

    The Windows Server team faced its own API battle distinct from the client side. In the 1990s the team developed high-performance and scalable web server capabilities such as Internet Information Server. This platform handily won the enterprise market that was buying servers to host web sites along with Oracle databases, Microsoft Exchange email, and corporate file servers. There was an entire new software industry built on servers running Windows that rivaled or even exceeded the size of the old end-user software industry. Companies such as SAP, Siebel, VMware, EMC, and more rose from this era. These companies were really the platforms of the server era as most also supported Unix, Linux, or even IBM mainframes.

    The ISVs and developers of this era were also mostly inside of corporations or consultants, either independent or at shops of all sizes. Their business was in connecting and customizing these new server platforms one customer at a time. While Windows along with a database (such as Microsoft’s SQL Server or Oracle) could easily serve web pages, the real business was in building a commerce site, a customer-facing catalog, or an internal corporate portal and connecting that site to internal line-of-business systems such as SAP. Windows wasn’t much of a platform for that.

    The Developer division built that platform. The origin of .NET was to provide the platform and abstractions for the web application landscape.

    Arising during the fierce competition for mindshare with the Java language from Sun, .NET was Microsoft’s own answer to “middleware” or software that was neither an application nor an operating system. The success of .NET proved that a platform does not have to be an operating system but can be anything that developers bet their business on—something we in Office had been saying about Word and Excel for ages. The .NET platform provided for the new world of browser and web server computing what Visual Basic and SQL provided for the Win32 client-server era.

    This strategy and execution were incredible. The dominance in the enterprise customer segment prior to the cloud era was unparalleled. The emergence of cloud computing, however, posed new challenges. What had been an annoyance competitively had become the platform of the cloud. The tools affectionately known as LAMP, representing Linux, Apache, MySQL, PHP/Perl/Python and their progeny, defined how the new enterprise public cloud would be built. These were the building blocks of applications built on Amazon AWS as well. Almost as fast as .NET rose to prominence, it faded into the installed base. There’s a footnote to this in that many of the .NET-based systems built and deployed in the early 2000s were mission-critical business systems that will be used, maintained, and enhanced for a decade or more.

    The story did not end there because in parallel with the rise of .NET for servers, the Developer Division, tasked with battling Java, also had to compete with the use of Java on the desktop. There was ample evidence that the raison d'être for Java on the desktop—the mantra “write once, run anywhere,” meaning that Java programs could work on any type of desktop such as any version of Windows, Mac, or Linux—was doomed to failure. That was not reason enough to let it go.

    As a result, there was a wave of products designed to bring .NET to the desktop to write Windows programs. The languishing Win32 API was no longer receiving much support or innovation from the Developer division. My old product, C++ and MFC, the preferred choice of professional ISVs using Win32, was relegated to maintenance mode. Instead, a new and constantly expanding set of technologies from the Developer division were evangelized as the preferred way to write professional Windows software. The names and differences between them are not entirely critical and my intent is not to start a debate over the relative prominence, target customer, or capability of any one of these. Some of the new desktop technologies included: WinForms, Forms³, WPF/XAML, Silverlight, and others that were less visible or more oriented towards application internals.

    A defining characteristic of each of the .NET technologies was a strong relationship to Microsoft’s programming language developed for the .NET server platform, C#, pronounced see sharp. This language was designed by the highly regarded language designer Anders Hejlsberg (AndersH) at Microsoft. He was previously at Borland which had licensed his product that became the legendary Turbo Pascal. The use of C# was a dream for BillG who always wanted Microsoft to develop and own a proprietary programming language implementation.

    The combination of the C# language and the above growing list of technologies would together be known, confusingly, as .NET. While it made sense from a marketing perspective to have a single umbrella term, it made for confusing discussions with developers. Were we talking about a language or the API? Were we talking about programming the desktop or the server? Which client API specifically, since they were both discrete and overlapping? This mattered because few developers used the whole of .NET, and so when engaging with them narrowing it down mattered.

    Even with .NET branding, owning a language was not enough. In fact, as I argued with BillG going way back to C++, a proprietary language was neither necessary nor sufficient strategically. The only thing that mattered in a platform battle was owning the APIs and runtime. Languages are both easy to copy and easy to replace, but the use of APIs in code developers write lasts a really long time. The “P” in LAMP was showing how many languages were already native to cloud computing. As we see today, the poly-lingual era of computing is the norm for most large systems. Enterprises did not like this, nor did the consulting and ISV world that was well paid by enterprises. They wanted a homogeneous C# world they could leverage across projects. The Developer division strategy fed that need to the exclusion of building quality Windows applications.

    The other defining characteristic of the new Developer division desktop technologies was the weak relationship, or lack of one, to Win32. These technologies certainly ran on Windows, but they were not Windows itself and did not have the performance or quality of Win32-based, or what we called native, applications. They were layered on top of Windows architecturally. The strategy was specifically to evolve these technologies separate from Windows so they could be provided across Windows versions and because the implementation was specifically designed to avoid interfering with Windows.

    Building .NET this way ensured that the Developer division was not beholden to the slowing and unpredictable Windows ship schedule. The rift between Windows and Developer grew significantly from 2004-2006 as the Longhorn/Vista project progressed. Then, Windows leadership effectively banned .NET from Longhorn because of performance and memory management issues. The Windows team made a rule that no applications written with .NET would ship with Windows Vista.

    This infuriated the broad developer community as it seemed like a classic case of “do as I say, not as I do” after years of evangelizing that .NET was the future of Windows. There was a strong sense of betrayal among the .NET community outside of Microsoft, particularly those developing desktop applications. This put them on the defensive, constantly on the lookout for signs that Microsoft was backing down from support of their preferred .NET technology. One member of the .NET developer community consistently checked pre-release builds of Windows Vista, creating a nutrition label of sorts, and concluded that no more than 4% of Vista used .NET code. That was substantially different from the fully .NET operating system they believed Microsoft had promised them.

    Managing an ecosystem comes with a responsibility to maintain a consistency of messaging as well as to avoid undermining the financial investments people are making in that ecosystem.

    Windows Server did not see .NET this same way. In fact, Windows Server fully embraced .NET and it became an integral part of the value proposition. Why was there such a difference when only a courtyard separated the teams? .NET was the Server platform, plain and simple. While there were some developers writing native code to the networking, storage, or kernel layers, these were the rough equivalents of those writing device drivers for the desktop. They were an important part of the ecosystem, but a very small community. This would have been the right way to separate the different API layers on the desktop, but that was not the approach taken.

    Instead, a deep schism was created between the Win32 and .NET APIs on the desktop. This schism went even further than that because .NET itself had non-overlapping technologies where any given development project would likely only pick one, for example Silverlight or WinForms, creating advocates for one part of .NET over another even as the evangelists painted a picture of a .NET family.

    There were also various Win32 APIs that weren’t clearly part of the formal definition of Windows, such as DirectX graphics, but were used by different parts of .NET in varying ways. Often .NET would use a technology in Windows, which seemed good on the face of it. Diving in, one would find .NET used a subset of Windows that precluded using the full capabilities from within a .NET program. It was super confusing. Such details were often lost in the fervent online discussions and zealotry that surrounded this debate. I understand as I write the preceding that people would want to debate the merits of these opinions or offer examples of how one could indeed mix and match API sets. Section 091 describes such “stupid developer tricks.”

    While the bulk of the world of software developers were moving to or had already moved to the browser, those within the Microsoft sphere of influence were debating the merits of their preferred way to write desktop apps for Windows, so long as it was using C# and not using Win32. There would be few to no pure-play commercial software products for sale that used .NET, other than some tools to support specific hardware or other support products. The Zune desktop app was often used as the example from Microsoft. A rewrite of Visual Studio itself was often discussed in this vein as well.

    For the bulk of the world’s developers, there was only one thing holding back the browser and HTML-only world so many developers desired, aside from bugs and inconsistencies between browser vendors and versions. Surprisingly, that was Adobe. Adobe acquired Macromedia Flash in 2005. Flash had become the way to bring the power of desktop computing to run inside the browser. It crushed Microsoft’s previously discussed attempt, ActiveX. Primarily it won over the world of streaming video, having been chosen by YouTube, and also the advertising world, where Flash was routinely used for obnoxious and intrusive flashing and flyover advertisements, browsing interiors of cars, and movie clips.

    Recall that Silverlight, in the 2010 timeframe, had originally been Microsoft’s answer to compete with Flash but had been repurposed to be the API for Windows Phone 7 and subsequently Windows Phone 8. Silverlight was related to the whole .NET umbrella term but came with its own set of constraints. It had little to do with our Windows platform. Silverlight was unapologetic about running on not just old versions of Windows but also Mac and soon smartphones. Microsoft’s evangelism kicked into high gear for Silverlight. Many Microsofties fondly remember Netflix using Silverlight in the browser in the early days of Netflix.

    Adobe Flash would hit a big speedbump, or more correctly a brick wall, along the way. Apple. The iPhone never supported Flash. Many thought this would doom the iPhone as Flash had become so synonymous with the fun part of the web and the economics of the web. The iPhone team worked with Google and YouTube to create an iOS app that played YouTube videos. That was not only the right solution for customers but made clear a deep philosophical point held by Steve Jobs—operating systems are a “thing” not a set of “layers” and to build a good operating system one does not layer one operating system-like technology on top of another.

    This view would be seen as somewhat contrarian in the halls of Microsoft since Silverlight depended on such layers, as did the rest of .NET. Outside of Microsoft, people would perceive Apple’s stubbornness about Flash as simply Steve Jobs trying to control things. It was obviously frustrating for Apple. It got to the point that Apple and Steve Jobs needed to say something. The reason was that the iPad was about to hit the streets in mid-2010 and the tech press was hounding Apple over the need to support Flash on such a large-screen browser as the iPad if it were to be a true computer and not a phone. It didn’t matter that Adobe never delivered Flash to any phone, as Jobs pointed out, or that on Netbooks Flash ran so poorly as to be useless. They wanted Steve Jobs to fix this.

    Steve had a better idea. On April 29, 2010, commensurate with US availability of the iPad, Apple published a missive Thoughts on Flash signed by Steve Jobs. I thought it was brilliant. To this day I believe it was the most concise expression of excellent product strategy and appreciation of great engineering that any CEO anywhere had written during the PC era.

    Jobs laid out one of the clearest and most well-articulated strategies for why Flash was not a good idea on the iPhone, or frankly, ever. In terms of strategic clarity in general this remains a standard by which to judge any strategy. He dismantled Flash in fewer than 1,700 words, detailing that Flash was closed, not really part of the web, fell short on reliability, security, and performance, had poor battery life, did not work with touch, and, most importantly, that the whole concept of third-party runtimes is bad for everyone. Was it self-serving? Sure. Apple wanted people to build apps for the iPhone using Apple’s proprietary tools and platform and everyone knew it. But Jobs wasn’t wrong. In fact, he was precisely right on the negatives, which killed everyone to admit. This was the technical buzzsaw at its finest.

    As I read the polemic, I found myself nodding in violent agreement. The current choices we faced in building Windows 8 found us in the middle of the various .NET runtimes each of which shared shortcomings and limitations with Flash. The Jobs memo also took me back to my first product, Microsoft Foundation Classes, when we lived through making all the mistakes of building a C++ variant of a third-party runtime. We discarded the “Old AFX” project after explaining to BillG why what we built in 1991 was so big, slow, and unlike Windows. Everything about Thoughts on Flash resonated. I loved it.

    I quickly circulated Thoughts on Flash and talked for a long time with Aleš Holeček (AlesH), by then a vice president leading the creation of the app platform for Windows 8 as the engineering leader of DEVX, the platform counterpart to the UEX user experience team. He had firsthand experience in this problem area having worked on a Flash-like competitor in the early days of the web. We shared a lot of nodding when discussing the Jobs memo.

    What Jobs captured so well was that modern computing had moved in a new direction. The idea of a modern computing platform hosting another platform on top of it while that platform essentially emulated, but differently, the underlying native platform was patently dumb. It was a guaranteed way to drag down the whole of the experience across the gamut of reliability, performance, touch, battery life, privacy, and so on. For certain Jobs’s memo was a full assault on Flash, but it was the whole concept of building platforms on top of platforms that was the problem.

    The Developer division did not see the situation the same way.

    The tensions and divisions that had been growing between Windows and Developer since the arrival of .NET in 2001 were reaching a breaking point, not just within Microsoft but the developer community as well. From their perspective, Windows 8 should make a big bet on the runtimes they had created just as they had been promised previously.

    We were well-aware of this desire and the existing schism in developing Windows 8. JulieLar and I were both rooted in our own experience in the Developer division in the early 1990s. PJ Hough (PJHough), who led the amazing cross-group partnership between Office and Visual Basic to bring VBA to all of Office, was now a leader in the division. Our connections were strong. There was a good deal of positive history.

    Similar experiences can yield similar engineering results, and like Apple we had concluded early in the Windows 8 design phase that Windows needed a new platform, not yet another set of libraries on top of Windows like Flash. The Windows 8 platform needed to be Windows 8 itself, not a library floating on top. We were also determined to build first-party apps the same way we were going to ask developers to build their apps. This “eating our own dogfood” would be a critical validation of the platform. The tools and platform would support a new set of modern needs for Windows 8, paraphrasing from pre-release developer documentation defining the platform attributes:

    * It is written to take advantage of the Metro-style immersive experience:

    * It provides a UI / look & feel that is compliant with Metro-style including the app bar, animations, typography, layout, and so on.

    * It supports modern startup, shutdown, and hibernation, windowing, and process lifetime management.

    * It supports app contracts. App contracts define capabilities apps can implement to participate in Windows-controlled UI experiences, such as: live tiles, snap view, splash screen, search, share, notifications, file picking, app settings, connect. (A minimal sketch of one such contract follows this list.)

    * It runs in an Application Container for security and is completely isolated from other apps.

    * It is only delivered through the new deployment technology, the new App Store.

    * It is installed per user, not per machine.

    * It installs and uninstalls cleanly leaving nothing behind at all.

    * It is impossible for an app to change the state of the OS in an irreversible way.
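
    To make one of these attributes concrete, below is a minimal, hypothetical C# sketch of a Metro-style page participating in the Share contract via the WinRT DataTransferManager API. The page class and the shared text are illustrative only; this is a sketch of the idea under those assumptions, not code taken from the Windows 8 samples.

        using Windows.ApplicationModel.DataTransfer;
        using Windows.UI.Xaml.Controls;
        using Windows.UI.Xaml.Navigation;

        // Hypothetical Metro-style page that acts as a Share source.
        public sealed class SharePage : Page
        {
            protected override void OnNavigatedTo(NavigationEventArgs e)
            {
                // Ask Windows to call back when the user invokes the Share charm.
                DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
            }

            protected override void OnNavigatedFrom(NavigationEventArgs e)
            {
                DataTransferManager.GetForCurrentView().DataRequested -= OnDataRequested;
            }

            private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
            {
                // Supply the content the user chose to share.
                args.Request.Data.Properties.Title = "Shared from a sample Metro-style app";
                args.Request.Data.SetText("Hello from the new platform.");
            }
        }

    The point of the sketch is that the source app only supplies data; Windows brokers the exchange with whichever share target the user picks, which is how these experiences stayed consistent and sandboxed.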

    Nevertheless, at our unveiling of the user experience in June 2011 we ran right into this sensitivity and drew the full wrath of the .NET community. It was just the omission of a couple of words and a lack of appropriate deference to .NET that set us up for a good deal of pain. In the course of the demonstration, with both Julie and me on stage, we said, and echoed each other on, a key aspect of the new Metro-style apps.

    We said these beautiful, secure, easy-to-build, safe, reliable apps could be built using “our new platform with HTML and JavaScript.” That was it.

    It was just a reflex. It was in fact the news. Microsoft embracing the world’s most popular and widely used programming languages for its flagship product was a big deal. It was a key part of how we would make building Windows 8 apps easier and more accessible to legions of developers. It was also differentiated relative to Apple’s proprietary and unloved Objective-C and Android’s Java that wasn’t living up to the hype. As if to pour gasoline on the fire, JensenH in his video viewed by millions said the same thing. Steve Ballmer even waded accidentally into the fray simply repeating the statements made at the D Conference demo.

    The problem was we inadvertently sent a message to a very sensitive and on-edge community that we had jettisoned .NET and finally made good on what the Vista team had essentially done by fiat. In an ArsTechnica article documenting the assault on .NET the headline ran “Why Microsoft has made developers horrified about coding for Windows 8” and was so indignant it even quoted an extraneous “uhh” in Julie’s statement, “our new developer platform, which is, uhh, it's based on HTML5 and JavaScript.”

    This was a problem of our own making. The strength of this reaction was testament to our incredible evangelism efforts—the success of which greatly outpaced the impact of these technologies in the market. We knew the data from developer surveys and what people were using. We knew what programs were running on Windows PCs through our telemetry. We knew what technologies the new generation of cloud companies were using, and the world’s universities were teaching. We knew that within the Microsoft bubble .NET was everything. While that is a big bubble, it was small relative to the whole universe especially with the rise of mobile platforms. We used to joke about the Developer division surveys that always showed dominance of .NET because of the well-known Microsoft echo chamber or bubble—the survey results said “100% of .NET developers use .NET.”

    There were many frustrations with our statement at the D Conference expressed through countless blogs and endless discussions about this transgression. Most began with the notion that HTML and JavaScript were immature and lacked the horsepower to serve as the foundation for great applications. Rooted in this commentary was a disdain for what were routinely dismissed as “script kiddies,” or those calling themselves programmers who merely created static web pages or simply modified a few lines of a web page and claimed expertise. This was from another era and reminded me of the age-old “Real Programmers” meme from long before there were memes. The truth is by 2010 real programmers did use HTML and JavaScript. In fact, they were the most used, the most taught, and the most actively developed languages. Taking a buzzsaw to HTML was going to be a losing proposition, no matter the technical merits programming language experts might debate.

    The main issue, however, was where this left all the other Microsoft technologies. Disrespecting HTML and JavaScript was simply smoke around the .NET fire. Specifically, where did we see Silverlight, its more feature-laden parent WPF, and the language C# fitting in? Developers were apoplectic about their investment in Microsoft .NET technologies not moving forward with Windows 8, whether those were WinForms, Silverlight, WPF, C#, or even C++.

    Silverlight/.NET expert Mike James wrote, in “Dumping .NET - Microsoft's Madness,” that if “the future of Windows apps is HTML5/JavaScript then Windows has no future.” His post went on to explain his idea that Silverlight is Windows, it just runs inside a browser. He also explained how .NET improved the Windows API, making it more readily programmable.

    Our choice was either to engage at the time, without pre-release code to test or validate what we were saying, or to wait it out over a Seattle summer until we would have code available. Obviously, given the situation, the concerned developer community would doubt every word we wrote. We knew once the pre-release code was distributed, we would have to answer these festering questions but could do so pointing to the product. I chose the latter and waited for our developer conference scheduled for September 2011. Engaging in online discussions with an asymmetry of information in the context of such deep concerns would be a frustrating exercise for everyone.

    Tim Anderson wrote in his blog, “Microsoft Refuses to Comment As .Net Developers Fret About Windows 8” and said, “In fact, it is bewildering that Microsoft is being so careless with this critical part of its platform, even if this turns out to be more to do with communication than technical factors.”

    Finally, we could always count on Mary Jo Foley of ZDNet to catch a leak from the Developer division and her sources clearly indicated “Microsoft needs to tell Windows 8 developers now about 'Jupiter' and Silverlight.” Jupiter was the code name of the other part of our Windows 8 developer strategy—to use C#, WPF, and XAML. The article continued “Microsoft's longer-term goal is to align its various developer stacks, giving it a story that's comparable to Apple's. Because Apple [supports] iOS on tablets and phones, Apple developers can write once and have their apps run in both places with relatively little modification.” Just as many of the blogs stated, this article also concluded “Like many devs I've heard from, I don't believe Microsoft can't [can] afford to wait three more months to let its developer base know what its intentions are. So far, however, ill-advised silence seems to be the Softies' plan.…”

    We had a plan. We had a new platform. It addressed these concerns. We also knew that if we started answering questions now, we’d end up in theoretical debates from June through the developer conference in September. The conference, a rebranding of the PDC to //build, would provide code, tools, and answers to these questions. I thought.

    Off to //build to share the platform we built.

    On to 104. //build It and They Will Come (Hopefully)



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hardcoresoftware.learningbyshipping.com
  • A challenge that comes with writing down experiences occurs when writing about events that readers lived through, have strong opinions about, and feel they know the full story. My purpose here is to share what we were thinking and doing at the time, how a broad set of people reacted, and then to offer my views of the reasons leading to the results. That is a way of saying this section is going to start with what we set out to do, not where we ended.

    Back to 101. Reimagining Windows from the Chipset to the Experience: The Chipset [Ch. XV]

    We set out to reimagine Windows, but it is interesting to ask what exactly the product is to most people. This helps us to understand the thoughts behind making a major change. We saw this in the changes to Office—we recognized that the value of Office was not, as many might have thought, the file formats or the old File|Edit|View|Tools|Window|Help menu structure, but rather the inherent capabilities and implementation of those capabilities. In that spirit, we looked at Windows and saw much more than the specifics of the Start menu or any particular expression of a user interface. Windows at its core proved to be a remarkably resilient operating system, and our goal was to tap that resiliency to bring it new capabilities for a new world of customers.

    Fundamentally, an operating system can be thought of as the software that allocates hardware resources, manages connectivity and devices, and defines the human-to-device interaction model. Beyond these technology distinctions, Windows is also a culture of openness to developers and an ecosystem of partners that has itself proven resilient over time. Windows is no more any one of those things than it is any single feature or attribute above the others.

    As both a participant in and later a contributor to the evolution of Windows, I find the transitions that the Windows product has gone through to be a case study in the “soft” part of software and in the flexibility of the product team to engineer transitions from one evolutionary stage to another. Consider just some of the transitions the Windows OS kernel, or core operating system, has undergone:

    * GUI transition. Windows itself was the product of a transition that many doubted could be made. Could an entire OS be built upon the "shaky" underpinnings of MS-DOS in the face of Macintosh? A remarkable amount of work went into the technology across the ecosystem, such as support for 32-bit microprocessors, that made Windows 3.0 such a watershed product. Yet underlying that, customers could still bring forward investments in all those applications and peripherals from MS-DOS.

    * 32-bit transition. The transition to 32-bits was one that required a vast amount of change and brought with it the introduction of "plug and play" and the ability to run more sophisticated graphical applications and games of unrivaled quality. The introduction of Windows 95 was a watershed moment for the whole industry. While in hindsight it looks as though it was a sure thing, many at the time proclaimed it would be technically impossible.

    * Internet transition. Immediately after the release of Windows 95, the conventional wisdom quickly became that Windows would be replaced by a browser. Yet few would have argued with the fact that the presence of Windows, with its support for networking, the introduction of graphical web browsers on Windows, and the openness of the platform, contributed to the transformation of the world of information technology. In 2010 we were just starting to see how the powerful graphics of Windows could bring standards-based HTML5 to life in unprecedented ways. As we will see in the next section, the programming model and API of Windows were indeed still struggling.

    * Server Scale. As we continued to evolve the client (workstation) OS we were using this same OS foundation to power the datacenter. With Windows Server we scaled the OS to support hundreds of computing cores and terabytes of storage. And along the way we created this OS for multiple CPU architectures driven by the demands of server computing (Alpha, MIPS, Itanium, and 64-bit). At each step most people believed that such flexibility was neither prudent nor possible.

    * Security and Reliability. Through the above transitions, there was an undercurrent that Windows was "aging" and that it could not transition to modern computing needs of much larger memory architectures, multi-core OS support, improved reliability, and much better security. The introduction of Windows XP was a milestone in bringing our enterprise server and workstation OS to mainstream consumers. Ironically, at the introduction of Windows XP many thought we had reached too far, and that the OS was more than people would need or could even afford to power. Throughout this transition the introduction of Windows Update enabled the OS to stay connected to customers, provide code updates, and distribute software on behalf of the ecosystem.

    These transitions were supported by consistently strong engineering efforts to refactor, rearchitect, and re-tune the Windows operating system. Windows also showed a broad set of efforts at the experience, or user interface, layer of the system. The various editions, including Windows Home, Windows Professional, and Windows Enterprise, while often viewed as pricing and licensing efforts, did in practice introduce a wide range of functionality tuned to different customer segments. Windows was able to reach up and down in complexity using the resiliency and flexibility inherent in the architecture.

    Yet, at the experience level Windows also struggled to achieve critical mass in several important areas even with the capabilities of the team and assets in the code. In Hardcore Software, we’ve seen the difficulties of creating tablet computers (Tablet PC), handheld computers (Windows CE), home entertainment (Windows Media Center), industrial devices (Windows Embedded), and most recently the ongoing development of smartphones (Windows Phone 7).

    What is it about the experience layer that has proved so difficult for Microsoft? There could be many possible explanations. Was Microsoft too early when it should have been patient? Was Windows code simply the wrong starting point? Did we bring out good products but have inferior marketing, sales, and distribution? Some would point to one or more of these.

    I had my own theory and plan for trying something different.

    My view was that Microsoft relied too heavily on the notion of architectural layering and believed that experience could always be layered “on top” of the operating system. This computer science view drove nearly every discussion on how things should move forward. It was viewed as the key to architecting Windows for these different experiences while maintaining the shared code of the actual operating system.

    Microsoft systems culture (aka BillG) loved to believe that with the right architecture things in one layer could advance independent of another. I don’t think I could begin to enumerate all the times we debated issues that boiled down to me suggesting the abstractions were in the way while Bill insisted that great architecture would solve the problem. The counter was that I was failing to consider building a great architecture before addressing the problem. I was merely suggesting that we might end up spending 90% of the time on 10% of the problem.

    In my heart I am an “apps person” as I’ve noted many times. One thing that is conceptually different about apps, as compared to operating systems, is that apps tend to be much less hardcore (or religious, as some might say) about architectural layers while being much more zealous about solving the customer problem, even if that means breaking through well-considered abstraction layers. In an operating system there is a general tendency to view the layers and architecture as goals the system must achieve; the layers enable advances and are deeply respected. In apps, the layers and architecture tend to be the starting point, while innovation and moving forward usually involve breaking those very abstractions.

    Each of the above attempts at expanding the experience of Windows was implemented with the explicit goal and architectural starting point of not interfering with the core of Windows. Importantly, the teams executed without organizational alignment across the Windows team or schedule. The essence of the former COSD division was to serve the main missions of Windows for desktops and for servers, but it generally favored the server mission for cultural and historic reasons. The Client division served the multiple experiences of Windows but did not generally prioritize experiences beyond the main client. To be clear, I am not saying this was wrong at the time; rather, it introduced tradeoffs that had side effects relative to other missions.

    This to me was the kind of decision made early in a project that is difficult to work around. The inability of the Tablet PC to fully embrace native Windows implementations or for Windows Phone to tap into the available power of the Windows operating system support for devices and connectivity were examples I’d personally experienced. As an apps person in the “two gardens” sense, the same way we needed Word and Excel to share code and align around everything from the use of the Windows registry to HTML to drawing code to user interface, we needed Windows to align around these expansions and changes to the experience.

    Windows was making a tradeoff at each of these junctures. There were two higher-priority advances that needed to happen. First, the work to transition from MS-DOS and 16-bit Windows to the full 32-bit operating system was an imperative that could not be compromised. Second, the need to continue to scale Windows up for the data center and to have a consistent client-server operating system was architecturally key. These were long-term goals that were put in place and executed over at least two decades, as discussed in 011. A Strategy for the ‘90s: Windows.

    By 2010 we had achieved those goals. The achievement of those goals came, arguably, at the expense of losing out on tablets, phone, internet-scale cloud, and embedded devices of all kinds. Further, as we will see in the next chapter, we failed to align significantly in our developer tools and messaging about the Windows experience platform, further weakening our strategy vis-à-vis these new opportunities for software and devices.

    When announcing the transition Windows was making to support SoC and ARM, the resiliency of Windows was top of mind. So much so that the working title of the event centered on “resiliency,” and that was the script I prepared, though it was rightly deemed defensive.

    The event stopped at the chip layer, intentionally. I knew the history and what we had been deliberate about in planning Windows 8. Windows 8 was not going to simply stick to the architectural layers that were in place. This transition was much bigger and needed a holistic and consistent approach up and down the layers. We were specifically not going to bolt a new experience on top of Windows any more than Windows 95 bolted a GUI on top of MS-DOS as some might have said we did. If all we did was port Windows to ARM, then literally nothing would have changed, and the outcome would have been no better than porting to DEC Alpha or MIPS.

    For Windows 8 to succeed in reimagining Windows for a new era, we would need a new experience matched to a new operating system kernel, combined with new capabilities for partners and developers.

    With that goal in mind, we set out to rethink the computer that we all used each day. We intended to build on the resiliency of Windows, not to simply go all in on tablets, but to support the attributes customers were already seeing in tablets and smartphones across a wide variety of computers of all shapes and sizes. This was not a fantasy or a “leverage Windows” approach lacking in engineering reality. In fact, it was exactly what Apple did to bring the iPhone and then iPad to market, building on the resiliency of their approach, only with a much more explicit end-to-end implementation. Apple even had a server product just as Microsoft did, though it ceased to offer it as a separate product in 2011. Android did not take this fully integrated approach to the operating system and scenarios, and the results we were seeing in the market showed Android’s weaknesses when it came to customer experience across different devices.

    While there was no escaping the tablet focus of the world beyond our hallways, our goal for Windows was to embrace new paradigms and bring Windows forward. Even at the unveiling of support for ARM, we tried to be clear that we saw a convergence of devices, not a world of distinct devices, no matter how much they might work together. Whereas Steve Jobs claimed that the iPad sat in between a phone and a Mac and had something special to do, we saw devices as a continuum. Specifically, Apple saw three discrete devices, and we believed that the tablet and the PC could be a single device, a new kind of device. We didn’t see this as a hybrid or an inefficient bundle, but as engineering and design problems to be solved.

    Where Apple, even to this day, sees a laptop as something without touch capabilities and a tablet as something with less than optimal “multi-tasking,” we saw devices as sharing metaphors and paradigms the way we saw desktops and servers successfully sharing paradigms. I viewed Apple’s line in the sand about the iPad as stubborn, especially considering the role of something as obvious as a keyboard, never mind plugging in a USB drive or external camera. Apple did not even introduce its own keyboard for the iPad until 2015, and a trackpad came in 2020.

    As I said, though in not so many words, at the CES 2011 event, the advantages Windows provided were the ones that had brought success and customers to so many:

    * Broad opportunities for developers

    * Openness of the software platform

    * Breadth of devices and peripherals

    These continued to be the hallmarks of Windows as we designed the Windows 8 experience.

    The Windows 8 Experience

    Previously, when describing the Office 2007 user experience, I shared that the team had developed a set of tools supported by a set of principles, rather than simply coming up with new variants of the various widgets. The Windows 8 program management team, again led by Jensen Harris (JensenH) reporting to Julie Larson-Green (JulieLar), took this same approach. Sam Moreau (SMoreau) led the Windows design team, joined by the incredible Clay Satterfield (ClaySatt), who had worked on the Ribbon with Jensen, and Jeff Johnson (JeffJo) led the software development team. Given the breadth of this effort it feels awkward to call out only a few, but at the same time these members of the team were the force behind the work.

    When something new is introduced as a replacement for a standard, there’s a tendency to look at what is different and to believe in a “whole new cloth” approach to design. Very few products are like that. In fact, most new products that achieve greatness or ubiquity are a reassembly of what came before combined with new elements at the right time. Tesla did not make the first car, the first car with a screen, or even the first electric car. It did recombine many elements, and some new elements, to change everything about how we thought about cars. The same could be said about the iPhone, which recombined the cell-phone network, the internet, music players (which Apple had previously reinvented), and even the graphical interface (by adding touch). Likewise, we have talked about how Microsoft Office was not the first suite of products, nor was it the first spreadsheet, word processor, or graphics program, but a new and unique combination of parts. The common thread is that at each juncture the new product was also substantially different from the old, established leader, and diverged from where the experts thought the market was heading. That is innovation.

    Observing this across many fields and having executed on these sorts of changes is what led to the principles behind the reimagination and approach in building Windows 8. The Windows 8 principles infused the design of the entire product, from packaging to advertising, and more:

    * Do more with less. Windows simply has too many ways to do the same thing, and too often too many things required to do one thing. The Start menu is there to launch programs, but you can’t switch between running programs that way and all you can see about a program is the name. Then there’s the taskbar where you can switch between running things, find recently opened files (sometimes), and even execute some commands for an app. Then there’s the notification tray on the taskbar, which is sometimes used to launch programs, sometimes used to tell you innocuous information, and sometimes used to control important features like the sound volume or Wi-Fi. None of those were particularly wrong on their own, but they were not built and conceived with the goal of simplicity or consistency in approach. That is not meant to take away from success or familiarity, but it does limit the expansion of the platform over time, especially when faced with a new way of working that does do more with less.

    * Authentically digital. We can trace the history of how a computer looks to the evolution of display technologies themselves. The first Windows interfaces were limited to very low-resolution screens and so by nature looked essentially like mainframes. As computer screens could display more colors, but not that many, we ended up with themes like Windows Hotdog Stand with bright red and canary yellow. Everything was a dreary grey, Redmond Grey, in Windows 3.0. By Windows XP and Windows Vista, PC graphics had reached a point where drop shadows and then translucency were commonplace and filled the interface. Famously, Macintosh and then iPhone built on the desktop metaphor quite literally. How many users of Macintosh in 2010 still used paper files and manila folders, or kept addresses in a leather-bound address book? The Office save icon is one thing, but it wasn’t as though Word fired off a ding at the end of every line or required the use of a correction tape brush to backspace. Authentically digital was a principle that said to embrace being a computer, minimizing the visually tiring overhead that comes from trying to emulate a physical world. This came through in typography, colors, animations, and making room for the content that the user cares about, not the user interface we or app developers care about.

    * Pride in craftsmanship. Jensen and team were maniacs when it came to details. Every detail mattered: the text of every error message, the spacing of every word, the ordering of animations, and more. This pride was infused across the whole team from the start. Bugs that would have normally been postponed were considered, reconsidered, and not simply punted. In a talk he gave just after product availability, Jensen said “It's about caring about every, every, every, every, every, every, every, every, every, every detail. Because if you don't, who will? That doesn't mean you're always going to win. Sometimes you lose. Sometimes you fight for something and it's technically not possible or you're out of time or something else. But you should care, and you should feel ashamed when something doesn't work out.”

    * Fast and fluid. It was no surprise that the PC had become slow, but worse, it had become unpredictable. One of the main priorities in designing the new user experience was making everything predictable, everything interruptible, and making sure anything that took time would let you know whether real progress was being made. These engineering challenges were inherent in the design and architecture of Windows itself. A touch interface must be vastly more responsive than a mouse and keyboard. Whether typing or swiping, your fingers can feel even the slightest difference in milliseconds compared to what happens with a mouse. In this case the hardware became increasingly important, as we will see in subsequent sections. There was much we could do in the experience, but this too was a challenge from the bottom to the top of Windows 8. Striving for fast and fluid was a requirement for touch and made a huge positive impact on mouse and keyboard usage.

    * Win as one. No product the scale of Windows could be or would ever be designed by one person, but great designs combined with great executions tend to have a feeling that there was one point of view. In overcoming the architectural boundaries between the layers of Windows and development tools we would consistently assert the need to win as one.

    With these principles in mind, I wanted to describe the main user experience affordances in Windows 8 that worked across all form factors and on ARM or x86 hardware. From the beginning of the project, we assumed Windows 8 would run on the broadest range of hardware, from 7-10” ARM tablets to traditional laptops to all-in-one desktops to all-out gamer or creative rigs with multiple monitors and discrete GPUs. We self-hosted all of these across the team for the entire project.

    The affordances described below—the Metro-style interface—included the following elements: the Start screen, Live tiles, the touch experience, charms, and the classic Windows desktop. The next section covers the development of Metro-style applications and even more of the capabilities available in the experience.

    While I will summarize these here, there is a vast amount of reading in the Building Windows 8 blog especially the first post on the user experience. I’ve made these available on the Internet Archive project as they moved around quite a bit on the old Microsoft site and sometimes the images were lost.

    The first thing everyone saw in Windows 8 was the new Start screen. Technically, many would see the new boot screen first when powering on a Windows 8 device, which as part of our pride in craft was an area of intense work through the whole ecosystem including partners. The Start screen was from the start (a pun) a massive change with a clear intent to do more with less.

    The Start screen was a radical simplification of the combination of the Start menu, system tray, gadgets, and taskbar. Rather than a finicky and difficult-to-navigate menu haphazardly organized into some folders and some icons, you’d see a clear place to click (or press, as we’ll see) and launch or switch to an application. It didn’t matter whether the application was previously running or not.

    Combining launching and switching was long overdue. In the early days of the PC, knowing which programs were running mattered. It was an under-the-hood detail surfaced to end users who did not really understand why, but who had to concern themselves with running out of memory or slowing their computer down because of running programs.

    With Windows 7 the taskbar was enhanced to provide a way to launch programs the first time and to switch to them once running. Users did so by pinning the application icon to the taskbar. We saw in the telemetry of the product over time that most users were pinning most of the applications they used, and the frequency of launching programs from the traditional Start menu location dropped. Beyond that, the Start menu became a repository for installed applications, spamming users: most applications installed a folder of accessories and top-level applications, prodded the user to pin the application to the taskbar, and left a new shortcut to the application on the desktop.

    Typically, tech enthusiasts would pride themselves on vast collections of software installed on their PC. Often, they spent considerable time arranging, categorizing, and organizing their Start menus. This extreme in use cases is not uncommon in software and poses a design challenge. How does one design for a broad base case and an extreme case that often brings substantial complexity to the base case?

    In practice, however, the design of the Start menu itself began to fail at both extremes: smaller and more portable devices, as well as the increased use of larger, high-resolution monitors running at greater than the 1080p of high-definition TV. The Start menu had a sweet spot of screen size and resolution that was common on desktops almost a decade earlier. As typical PCs loaded ever more software while not dramatically increasing screen size, even the out-of-box experience of the Start menu turned into a dexterity test of up, down, left, right navigation. New large-screen monitors demonstrated the challenges of the menu design even further as the pixel distances to the small targets of the Start menu increased.

    The trend was clear: screens would soon improve beyond what a good deal of Windows software was capable of handling gracefully, and much of that software would not substantially improve.

    The Start screen “took over” the screen to provide a surface to do more (with less) while also acknowledging that a very small menu with a difficult navigation paradigm had hit a proverbial wall. The launch screens of phones and tablets were showing another way that rapidly became the most used experience for more people.

    Still, when I first saw the iPhone launch and the apps organized as an endless grid, my first reaction, speaking to JulieLar at the event, was to ponder how long it would take Apple to add folders. By iOS 4 in 2010, folders were added. That followed a period where people would endlessly futz to organize apps by personal folders, by App Store categories, or sometimes just by color. Then with iOS 14 in 2020, Apple added the App Library, a way of listing all installed apps alphabetically while providing for search and launch.

    Windows 8 would skip these intermediate steps. The design of the Start screen included folders, arranging apps in any preferred order, and an All Applications view. There was also a search command for apps. Additionally, when the Start screen was visible simply typing would invoke the search command. When Mark Zuckerberg, of Facebook, visited and I showed him Windows 8, this particular feature resonated with him. This was just before their pivot to mobile but at the time he pushed on launching the browser from there by just typing a URL. We considered that but were a bit reluctant to add more entry points to launch the “default” browser.

    Taking all these features together, we exceeded the capabilities of the combination of the Start menu and taskbar and did so while reducing the number of concepts users needed to understand.

    Located deep in the Windows 7 taskbar was the notification area, the third capability integrated into the Windows 8 Start screen. This area had become something of a dumping ground for a wide array of always-available tools, from the volume control to connecting to Bluetooth devices to a sort of mini-application that served to notify users of important information such as stock quotes, weather, or other information sources. Windows Vista had also introduced gadgets, an enhanced form of mini-application for showing information such as news headlines and the like.

    We chose to significantly expand this capability with Live tiles. The easy-to-press launching and switching buttons could come to life with useful information available at a glance. We went a step further and made it easy for developers to add additional tiles, called secondary tiles, so users of an application could pin additional information to the Start screen. This was a feature that first appeared on Windows Phone 7, where it enabled pinning a single contact or stock quote to the home screen. This capability made further customization of the Start screen possible, again something not possible with the combination of Start menu, taskbar, and notification area in any system-wide manner.
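
    To make the tile concepts concrete, here is a minimal sketch of how a Metro-style app could push an update onto its live tile, assuming the Windows 8-era WinRT tile notification APIs; the template choice and headline text are illustrative only, and the developer platform itself is covered in the next section.

```csharp
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

// Minimal sketch: push a text update onto an app's live tile.
// Assumes the Windows 8-era WinRT tile notification APIs; the template
// and the headline string are illustrative, not product code.
public static class LiveTileSample
{
    public static void UpdateTile(string headline)
    {
        // Start from one of the built-in tile XML templates.
        XmlDocument tileXml =
            TileUpdateManager.GetTemplateContent(TileTemplateType.TileWideText03);

        // Fill in the single text field this template exposes.
        XmlNodeList textNodes = tileXml.GetElementsByTagName("text");
        textNodes[0].AppendChild(tileXml.CreateTextNode(headline));

        // Send the notification to the app's tile on the Start screen.
        TileUpdateManager.CreateTileUpdaterForApplication()
            .Update(new TileNotification(tileXml));
    }
}
```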

    The Start screen was naturally navigated with a mouse scroll wheel, a trackpad, or the keyboard arrow and page keys. The Start screen itself was always available, just as the Start menu had been: press the Windows key on the keyboard. Pressing it a second time returned to whatever window was visible before. In this sense the Start screen behaved exactly as the Start menu did, except it filled the whole screen. It was sort of like using a Netbook, though absent the dexterity required by a 10” screen.

    On hardware with a touch screen, the touch experience came to life. Perhaps no other element garnered more attention, as touch was the defining characteristic of the new tablet era and the implementation detail that Apple used to define the iPad as a distinct device from the Mac. Our view was an expansion of, and decidedly a course correction from, Windows 7. Touch was now central to the experience as we baked it in across the whole PC experience, rather than bolting it on. However, the touch experience was designed to succeed when it was required, to be there when desired but not in the way otherwise, or to not be present at all if the hardware did not support it. Executing this design truly tested the win as one principle, as everything from boot through setup through daily use needed to consider the presence of a touch-only computer. Even the Tablet PC assumed a keyboard would need to be plugged in at times.

    The team created what they called a “touch language” to describe how the design would feel to users. The touch language included:

    * Simple and consistent

    * Fast and fluid

    * Limited gesture set

    * Direct manipulation

    * Sliding gestures

    * Reversible gestures

    * Limit the use of timed gestures

    My intent here is to continue to focus on the product strategy and not detail every aspect of the product design, but it is important to show how the strategy was reflected in some of these attributes.

    Everyone wants their PC to be fast. As described in the principle fast and fluid, we aimed for a new level of performance. The change in hardware to high pixel count screens and higher screen refresh rates meant that a much smoother interface was possible. The introduction of touch created the imperative that the interface be much faster as well. The intimacy, and that is the right word, that one feels with a device using a touch interface is much deeper. The way your finger connects with user interface elements is not just about speed but about how objects move with your finger. The ecosystem liked to refer to this as a “sticks to your finger” level of quality. How a swipe or tap or drag feels matters immensely. Engineering this experience involved all layers of the system and a deep understanding of how the elements of an application—the buttons, lists, and icons on the screen—interact with content.

    When the mouse was pioneered and implemented in the first Xerox PARC computer it had three buttons. Initially the concept of chording (as on a piano) was a key part of the design. Paul Allen thought that was too complex and limited the PC mouse to two buttons. Steve Jobs thought even that was too complex, and the Macintosh had only a single button. That did not last too long, as many were soon connecting mice designed for Windows or using a keyboard modifier to get to the commands exposed when clicking the right mouse button—Office began supporting right-clicking in Office 4.x, which was widely used on both Macintosh and Windows. When Apple introduced touch, Steve Jobs proudly defined the interface as having only a single finger. That did not really last either, as gestures were soon developed for a host of advanced, but often necessary, operations. These gestures were mirrored to a degree on the Mac trackpad.

    We knew from usage studies and telemetry in Windows, that the right click commands were frequently used. Beyond that, however, complex modifiers often went unknown by most users. We saw this with pen-based computing as well, where overloading the pen with complex gestures akin to those used by old-style proofreaders or combinations of clickable buttons on the pen were nearly impossible to discover.

    The Windows 8 touch language designed a finite and limited set of gestures that encompassed all we believed would be required to be highly productive in a touch environment. The design intentionally avoided complex chording gestures that are difficult to remember and awkward to implement.

    The entire touch language consisted of ten unique gestures, from simply tapping to perform an action, to using a thumb and finger to zoom or rotate (that’s two gestures), to swiping from the left (to switch applications), the right (for system commands), or the top and bottom (for app-specific commands), with the edge swipes accounting for four of the gestures in total. Application developers only needed to worry about seven of the gestures, as three were used only within the operating system.

    The initial reaction, that these gestures would be difficult to “memorize,” was quite common among tech enthusiasts and the tech press. What we saw in putting people in front of the user interface for the first time was a natural ability to experiment and to quickly master these basic gestures. People would simply tap and swipe around and, as if by accident, trip into the gestures, or simply try something that seemed natural, such as dragging, and it would work. Whenever we would say to the aforementioned techies that people discovered something by “accident,” they would counter that this was flawed logic compared to being taught. Yet we had seen this same type of learning happen with right-clicking the mouse. As if by accident, people discovered that the action led to a menu of the most important or relevant commands.

    Today, every mobile platform exposes functionality by swiping from edges or even corners.

    As we intended Windows 8 to be used on devices that were touch-only as well as those with no touchscreen at all, the gestures also had mouse and keyboard analogs. One of the more novel aspects of the mouse-centric execution was that we made use of the corners of the screen as touch equivalents for operations. The use of the corners was seen by some as obscure. In fact, at one point in the evolution of the product, we made a desktop background as a reminder of the corners, but mostly in jest. Why jest? There were two reasons. First, the corners were almost always found within a few minutes of mousing around experimenting. Second, to those in the design world the corners have special meaning as they are the easiest targets to reliably hit with a precision mouse—far more reliable than a single square button or even a menu command—by simply racing the mouse or moving on a trackpad without concern for stopping. The value of corners as hit targets follows from Fitts’s law, a model whose principles essentially say the corners serve as infinitely large buttons. In 2021 JensenH made a video detailing the model, which is linked in the endnotes, though we also spoke of it extensively in the e7 and b8 blogs.
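
    For readers who want the underlying math, the commonly cited Shannon formulation of Fitts’s law models the time to acquire a target from its distance and size; the symbols below are the conventional ones from the HCI literature, not anything specific to Windows:

```latex
% Shannon formulation of Fitts's law
%   T = time to acquire the target
%   a, b = empirically fitted constants
%   D = distance to the target
%   W = target width along the axis of motion
T = a + b \log_2\!\left(1 + \frac{D}{W}\right)
```

    Because the pointer stops at the screen edge, a corner effectively has unbounded width along both axes, so the logarithmic term collapses toward zero and the time to hit the target approaches the constant a; that is the sense in which corners behave like infinitely large buttons that can be hit at full speed without aiming.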

    The touch experience also provided for a robust on-screen typing experience. This included predictive autocorrect and support for a range of now-standard on-screen typing features such as emoji and diacritics. Owing to both the PC heritage and a realization that touch gestures for very common commands such as Copy, Paste, and Select All would be tricky, the touch keyboard included a Ctrl key to use those commands, arrow keys to quickly move left and right while touch-typing, and an option to display a full PC keyboard. Microsoft’s handwriting history also meant the on-screen keyboard could be used to recognize handwriting and insert text as though it was typed. No pen required! The first Windows 8 tablets came with wide screens, and to facilitate typing a split keyboard was provided that could be operated with two thumbs.

    The combination of the touch language and the new Start screen was a significant simplification and consolidation of concepts without a loss of flexibility or power-user features. There were a couple of common capabilities in the Windows 7 world that almost no typical Windows user knew were possible and that the new design addressed directly. Few knew, for example, that the Start menu could be rearranged by bringing up the menu and then dragging and dropping the icons and folders, or that an icon could be placed in a folder while dragging. Most just lived with programs in the wrong order on a crowded and hierarchical menu. These power-user features were valued but so mysterious as to be known by a tiny sliver of the user base.

    There are many system-wide commands that have historically been obscured or awkwardly placed in Windows, or worse implemented repeatedly but differently across applications. The last element of the Metro-style interface was the Windows 8 charms which provided access to system commands. The name charms came about because the icons displayed for the system commands of Search, Share, Start, Devices, and Settings were reminiscent of the clovers, hearts, stars, and moons of Lucky Charms cereal.

    The center of Windows 8 was apps. With Windows 8 we aimed to do more than present a sea of apps; we wanted to connect apps together easily. Most apps connected to each other through the somewhat blunt constructs of the clipboard and files. Cross-application scenarios were often hard-coded by app developers who had little information about what other apps were on your PC. The charms in Windows 8 provided a connection for all apps to (appropriately) connect to both the system and other apps. The mechanism developers used to implement these connections will be discussed in the next section.

    From a user perspective, the Search charm offered a simple method to search not only your PC but also the information available via your apps. In Windows 7 we provided file search that worked across basic file types. In Windows 8 we extended this to include using search to find apps installed on your PC and to allow those apps to search the information and content available to them.

    The Share charm provided the ability to share content such as photos, documents, contacts, or any other data with other apps. If you were looking at a photo, a web site, or a document, choosing Share would connect whatever app you were using to the selected app. For example, sharing a photo to a social network app would guide you through posting that photo on the network. Apple added this capability to iOS 8 in 2014.
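
    As a rough preview of the developer mechanics covered in the next section, here is a minimal sketch of a Metro-style app acting as a share source, assuming the Windows 8-era WinRT DataTransferManager API; the class name and strings are placeholders for illustration.

```csharp
using Windows.ApplicationModel.DataTransfer;

// Minimal sketch of a Metro-style app acting as a share *source*.
// When the user invokes the Share charm while this app is in the
// foreground, Windows raises DataRequested and the app fills in a
// DataPackage describing what it has to share. Assumes the Windows 8-era
// WinRT DataTransferManager API; strings are placeholders.
public sealed class ShareSourceSample
{
    public void RegisterForSharing()
    {
        DataTransferManager dtm = DataTransferManager.GetForCurrentView();
        dtm.DataRequested += OnDataRequested;
    }

    private void OnDataRequested(DataTransferManager sender,
                                 DataRequestedEventArgs args)
    {
        DataPackage data = args.Request.Data;
        data.Properties.Title = "Vacation photo";            // placeholder
        data.Properties.Description = "Shared from my app";  // placeholder
        data.SetText("A caption or link would go here.");    // placeholder
    }
}
```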

    The Start charm was an always-available mechanism to return to the Start screen. It worked just like the Windows key if you had a keyboard.

    The Devices charm provided the connection of your PC to other hardware and connected devices. The most common devices to connect to included printers, speakers for playing music, and secondary screens, either attached by cable or, new at the time, connected wirelessly.

    Finally, the Settings charm was the new Metro-style control panel. We knew this was going to be controversial for two reasons.

    First, the Windows control panel had evolved, or perhaps devolved, to be a catch-all for an array of mini-programs to manipulate all sorts of system settings. These programs could be complex, such as full anti-virus suites or even the control panel for managing your mail accounts in Outlook, which clearly did not belong there. Many people had grown up navigating this complexity and were able to somewhat fumble around the arbitrary categories to find settings they sometimes needed to change. The Windows 7 control panel had a search feature, but it was difficult to deliver on its promise because so much of the control panel was provided by third parties and the vocabulary of the control panel amounted to a set of inscrutable technical terms unknown to most.

    Second, and painfully, from the start of the project we knew there was no way we would completely replace the current control panel in a single release. We were certain modernizing the control panel would take several product iterations. We built the Metro-style control panel to encompass the most common settings that also had the property of not requiring any device or OEM specific code. To get to additional settings the existing control panel remained and could be further updated.

    The final, and as many would soon be debating, most important part of the Windows 8 experience was the Desktop, the traditional place for launching and switching apps, working with files, and the place many people stored the files they worked with daily, “on the desktop.” The Desktop was the very icon of the modern graphical PC.

    The design of the desktop in Windows 8 was straightforward—the Windows 8 desktop was the Windows 7 desktop, only much better. We improved the taskbar, file explorer, window management, the Task Manager, file copying, multi-monitor support, virtualization, and more. We even did what we could to embrace touch more than we had for Windows 7. The improvements in the desktop experience in Windows 8 were probably greater, in terms of developer hours, than in Windows 7.

    The reason for this was straightforward. The desktop was where customers ran the millions of existing, powerful, familiar Windows programs that were designed for mouse and keyboard. Office. Visual Studio. Adobe Photoshop. AutoCAD. And many more. This widely used software continued to be feature-rich, powering the bulk of the work people were doing on their PCs at the time. Browsing and mobile apps had all but taken over home and consumer computing scenarios, with a precipitous drop in the use of Windows apps for desktop publishing, photos, home finance, and entertainment. Many debated whether the browser on the iPad was complete or powerful enough for these scenarios. Bringing the desktop forward (along with metaphors such as manual, discrete window sizing and overlapping placement) was a huge benefit when compared to tablets without these features or programs. It was an explicit design goal of Windows 8 to bring this software forward, run it better than in any previous version of Windows, and provide the best environment possible for these products as they evolved into the future as well.

    As the iPad grew in popularity, the interest in traditional desktop apps grew with it. Many customers, particularly in business, wanted the power of apps like Excel or PowerPoint to accompany them on their lightweight, long-battery-life, travel-friendly iPads. There was a sudden rise in remoting solutions that might enable access to these tools. Unfortunately, for desktop apps touch was a poor substitute for a mouse and keyboard, as we knew from Windows 7. Apple believed it would be a matter of time until these apps were ported or reworked for the iPad. We believed there was a role for a better operating system to support this scenario.

    To some the desktop was a thriving ecosystem rich with software capabilities. We certainly wanted to believe that was the case. In that spirit we did not think of the desktop and the million (or millions) of Win32 apps as legacy. It was our greatest asset. That’s exactly why, as described in the last section, the first question Intel CEO Paul Otellini asked us when talking about ARM was whether we ran x86 software.

    Specifically, we didn’t view the desktop as “legacy mode” or “old mode” but simply as the interface tuned to running Win32 software with a mouse and keyboard. It was pretty simple.

    The core questions we needed to answer during these early days were “What if someone only ran desktop/Win32 apps on a PC without touch?” and “What if someone mixed and matched Metro-style apps with desktop apps?”

    We iterated almost endlessly on minute aspects of the design when it came to the main detail of going from the Start screen to the desktop itself. We blogged about the design choices and discussed them in many forums as the software was shown more and got in the hands of the first users.

    The design settled on several core user interface elements. First, both desktop apps and Metro-style apps were first-class apps. The way we thought of this was that ALT-TAB, the primary way desktop users switched between apps, showed every running app. We treated the desktop itself as a launchable app, launched from a large tile on the Start screen. This followed from the way that MS-DOS was launched from Windows when Windows was introduced.

    Switching to the desktop was also available from the more touch-friendly app switcher, which was available by swiping from the left edge of the screen or mousing into the upper left corner. The desktop itself could be accessed by moving the mouse to the lower left corner, exactly where the Start menu was located. A user who instinctively scrolled or tracked to the lower left and clicked would be brought to the desktop.

    Perhaps the most visible example of the desktop and Metro-style living together as equals was the way Metro-style apps could be snapped next to the desktop, mixing and matching as one needed. The existing desktop-based Windows snap feature introduced in Windows 7 received some enhancements and continued to work as expected, whether running only the desktop or with a Metro-style app snapped as well.

    With these designs in place, it was time to show people outside Microsoft what we had been doing. We knew there was a lot to absorb and a significant amount of complexity to communicate. We decided to roll out the product in stages, each stage aimed at a particular audience and sharing a particular part of the product. Recognizing the magnitude of the user experience change, we chose to begin the rollout at the most influential of industry events and almost simultaneously at the most important partner event.

    I was a very weird combination of terrified, excited, and confident. I was terrified by just how much we had to communicate and how complex the message was. I was excited because the work was making so much progress and was so innovative. I was indeed confident because I’d seen how far we’d come and, frankly, I knew we had made the right bets given the state of the platform and what needed to happen to keep Windows relevant to a new generation of customers and strategic for the ecosystem.

    We chose to unveil the first US demonstrations of Windows 8 at the All Things Digital D9 conference in June 2011. Just prior to the conference we seeded excitement with a preview. Jensen and team recorded a rather DIY video from the UEX program management design room introducing the Windows 8 user interface. We dropped this video and quickly millions of people watched it. That might not sound like a lot today, but in 2011 millions of video views for a demonstration of a new OS was a lot.

    We also showed a preview of Windows 8 in Taiwan at Computex just before D9. We took advantage of the time change so both demos landed in essentially the same news cycle. This show is where all the Asian manufacturing partners and worldwide OEMs gathered to see the latest that would make for new PCs in the coming months and year. MikeAng and the Ecosystem team were responsible for this community and put on the demonstration. As with the video above, the word and images of this demo raced around the world.

    D9 was going to be incredibly special, but we were, frankly, terrified. JulieLar along with JensenH and a half dozen others were scrambling the night before to get all the pieces to work—it wasn’t just the software, but we were going to show a bunch of hardware that also barely worked months before the official release.

    I joined Walt Mossberg on stage for an interview as JulieLar prepared to demonstrate her team’s designs for the first time. On stage, Walt peppered me with a lot of difficult questions about the state of Windows and even Microsoft. It was a particularly brutal year for Microsoft as the industry had coalesced around the notion of a “Gang of Four” companies leading the future of technology, a phrase coined by Google’s CEO Eric Schmidt, no fan of Microsoft. His list of the technology leaders of the time: Google, Apple, Amazon, and Facebook. Walt wanted to know what it felt like to be left out.

    “Well,” I said, “nothing called ‘Gang of Four’ ends well.” Only a few years later, that inappropriate moniker would be replaced by the annoying FAANG, again without Microsoft.

    Walt pressed more on Microsoft missing mobile and Windows tablets not taking off and focused on them not really having a touch-centric operating system. What could I say? He was right. Where Walt really pressed was on the quality of PCs. I suggested that Windows could be a great tablet (especially on ARM) because it brought the great parts of Windows.

    Later that evening, some said to me that Walt was particularly hard on Microsoft and me in the interview. I didn’t see it that way. Everything he said was both true and exactly why we designed Windows 8 as we had. It was interesting to be on the receiving end of empathy, or perhaps sympathy, for once though. I wasn’t sure how I felt about that.

    Almost on cue, Walt asked how we would show off a tablet. It is important to note the context and the importance of tablets to the audience and industry. They were a huge new development, and everyone was super excited by them. Recall Walt’s review of the iPad and how he favored using it over his Mac or ThinkPad. That was Julie’s cue, and I receded.

    Julie came on stage with a battery of screens and computers, all working and ready, with a team backstage, collectively fingers crossed. I hovered behind her out of nervous energy, while Walt and Kara Swisher poked at screens and asked questions while Julie tried to stay on her script.

    The team had created a Lucite stand for a bare 10.6-inch touch screen display and the insides of a laptop, making for essentially a see-through demo tablet, like one of those old Ma Bell telephones made of clear plastic. The screen represented an ideal design size for a tablet, wide enough for a full-size keyboard, enough pixels for an HD movie, and able to show two apps at once. It was also the typical resolution of over two-thirds of laptops being sold.

    Running through a fast and fluid demo, Julie showed off every key aspect of Windows 8 and Metro-style: design and typography, Start screen, gestures, immersive apps, snap, touch browsing with Internet Explorer 10, the on-screen keyboard including a split view (which Walt had not seen before), and of course the complete support of the desktop apps everyone uses.

    Walt called this the “biggest shift in Windows.” That was true. Julie finished the demo by showing the other PCs on stage, all Windows 7 PCs made better by Windows 8: a regular laptop, a slate, and a full convertible Tablet PC. Phew!

    But once Kara and Walt asked the audience for questions, attention turned to the notion of “two modes”—desktop and Metro-style—and a nagging concern planted itself in my head. How would this play out? What could we do better? They weren’t really modes any more than MS-DOS programs were modes. Or were they? We didn’t come up with a good way to explain it. And if we had to explain it, we were already in trouble.

    To distill the feedback at the show and what we subsequently discussed in the b8 blog, the issue was that to some observers the desktop had been somehow relegated and obscured by the new Start screen, leading to three reactions. First, this created an impression that launching to programs (or files on the desktop) was an extra step away after starting a PC. Second, that going to the desktop was entering some secondary mode of Windows. Third, there was a visceral reaction that the Start screen was simply too big and got in the way.

    The Start screen itself was (literally) a big Start menu and the desktop was (literally) still another program that was always running. In Windows 7 we even made the desktop show up when using ALT-TAB to switch between programs, a fan favorite, so the desktop already looked like another program. But making that clear was the challenge.

    We continued to iterate and polish the design. Should we have a Start button or not? Should we have an option to boot the PC directly to the desktop? There were dozens of small changes made to the design and a series of several blog posts of thousands of words from the team discussing feedback, questions, and design choices and alternatives.

    As with every past release of Windows there was disproportionate attention on the basic user interface for launching programs and manipulating files—I say disproportionate because absent applications these are the features used to evaluate a new release and compare to the previous one. These features are also disproportionately used by tech enthusiasts, particularly file management and organization. Windows 8 was no different.

    This focus tended to overshadow that Windows 8 was moving up the stack, so to speak, and redefining the basic operating system to include innovation in the other essential ingredient of an end-to-end experience: internet-scale, cloud-based services.

    Operating on a different timeline through all of Windows 7 and then Windows 8, the Windows Live team steadily improved email, calendaring, storage, identity, and a host of other capabilities under the Windows Live branding, mainly bringing scalable (and cost-effective) cloud storage to every Windows 8 PC and customer, along with account management, email, photos and video, editing, and more. On their own, each of these services aimed to be competitive within its category, even going up against Google’s Gmail after a long road to improve the scale of Hotmail, later renamed Outlook.com. Even cloud storage, launched as SkyDrive in 2007 and then rebranded to OneDrive in 2014 due to a trademark conflict, had essentially under the radar become an integral part of the Windows 8 experience for roaming files and settings across multiple PCs and mobile phones, including Android. We also integrated the new Office web applications (Word, Excel, and PowerPoint for browsers, which were started way back when I was still in Office) with both storage and email.

    At the 2007 launch of these services Paul Thurrott even went as far as to say, “Looking ahead, Microsoft is working on the next generation version of Windows Live, which it sees delivering as a set of software and services suites, one each for the browser, Windows PCs, and Windows Mobile-based devices. The browser suite will provide common branding and user experiences, and will work in both traditional, PC-based browsers as well as mobile phone browsers.”

    There was much to be done, more than we had time for, but all the pieces were part of the plan and executing well. These services shipped separately from Windows on their own schedule, with frequent updates, while at the same time planning to align with Windows 8 as the critical cloud and service elements of the product.

    The innovations (and inventions) across the experience and the platform prepared Windows for a new generation. The modern Metro-style experience in Windows—fast and fluid, immersive, touch-first, and connected—brought a PC up to the standards of the smartphone, which had defined a new level of interaction. Metro-style apps were a missing piece. Without apps, we had nothing but a nice project. We needed developers.

    The experience described in this section was designed with a broad range of new operating system capabilities that would be made available to developers so they could build Metro-style apps—we called this the WinRT platform.

    A couple of months after the iPad was available to customers, it was clear it was a hit product. By the June 2010 earnings call, after only 80 days in market, inventory constrained, and still not shipping all models, Apple had sold three million units for an estimated $2 billion. At the time it was the fastest-selling mobile device of all time. If I wanted to brush it off, I could have argued that PCs were selling that many units every few days or perhaps that the iPad was just Apple’s Netbook. But that would have been short-sighted. As we knew, the iPad and the iPhone (and to some degree Android) were drawing all the attention of developers. Developer attention meant there was durability to the platform. The iPad ran all iPhone apps, and rapidly apps were being tuned for the larger screen of the iPad.

    It was our turn to woo developers. The stakes were high.

    But first, why did we even need a new platform to woo developers? We already had Win32, and .NET in multiple flavors, plus Internet Explorer and the web platform. There was a schism between the Windows platform and the .NET platform that had emerged over the past decade and gone unchecked. It fell to the Windows 8 team to bridge this widening gap, but we had a product to build and the world was changing fast.

    On to 103. The End of Windows Software



  • Welcome to Chapter XV! This is the final chapter of Hardcore Software. In this chapter, we are going to build and release Windows 8—reimagining Windows from the chipset to the experience. First up, the chipset. Then there will be sections on the platform and the experience. Following that, we’ll release Windows to developers and then the public. Then a surprise release of…Surface. There is a ton to cover. Many readers have lived through this. I’m definitely including a lot of detail but chose not to break things up into small posts. There are subsection breaks though.

    This first section covers the chipset work—moving Windows to the ARM SoC. Before diving right in, I will quickly describe the team structure and calendar of events that we will follow, both of which provide the structure to this final chapter while illustrating the scope of the effort.

    Back to 100. A Daring and Bold Vision



  • Hardcore Software has shared the vision planning process for five releases of Office and for Windows 7. Though not detailed here, we followed the same process for two waves of Windows Live Services as well as Internet Explorer 8 and 9. Windows 8 went through this same process, though by now as a team we had become pretty good at it. This section details the resulting Windows 8 plan, The Vision for Windows 8. As part of that, I wanted to take a bit of a journey into the alignment between Windows Phone and Windows 8 and the challenges we saw there. In doing so, I will describe things from the Windows perspective and not delve into the specifics of running the Windows Phone project, which wasn’t my responsibility. Rather, I wanted to cover the challenges of two large projects within the context of Microsoft, each trying to figure out what it needed to do. Looking back from today at 2010-2011 when this took place, it is the strength of Apple’s approach, starting from a reinvented desktop operating system for the iPhone and building out from there, that makes the events of this time strategically interesting. As I frame events, the key questions to ask would be “Should Microsoft have waited?” and “Would it have been ok to not be in market with any phone after Windows Mobile 6.5 until 2012 or even 2013?” At least that’s how I reflect on these times. The answers are not complete, as the next chapter will also cover some important aspects of this in more detail, particularly the hardware and platform elements.

    This section could really be a chapter and isn’t for the faint of heart. Dig in and have fun because it covers a lot of ground that took place in a relatively short time.

    Back to 099. The Magical iPad

    We never lacked clarity in what we intended to do—reinvent Windows for a new era. That meant a new experience, a new platform for developers, a new connection to services, and, yes, new hardware. Books about reinvention (or disruption) don’t tell you that you can’t just announce such intentions to the world. It turns out there is the intention to reinvent and then a plan to build it, though when you decide to share a strategy is an entirely different matter.

    I was old enough to have personally lived through the creation of the term Osborne effect as it pertains to pre-announcing products. In high school I programmed my father’s Osborne, which he bought to keep the books for the family business. We’d been using the Osborne daily since it launched and thought about buying an upgrade for the business. Then Osborne founder and CEO Adam Osborne pre-announced the Executive, a fancier model with more memory and a bigger screen. The only problem was that it was far from being done. The prototype was shown in 1983. Customers held off buying the original Osborne long enough that the company went bankrupt before delivering the new computer.

    Apple faced a similar dilemma when it first transitioned from Motorola chips to PowerPC chips. The transition was announced in 1994. While it is difficult to tease out the impact of Windows 95 from the chip transition, Apple’s share of the Mac/PC market would steadily shrink for another decade, and decline significantly in absolute unit sales, until the next chip transition to Intel.

    Microsoft had long been relatively immune to the risks of pre-announcing products because the overall growth of the PC market combined with the breadth of the product line dampened any pullback in a single product. The PC needed an OS even if the next one was delayed. As we saw with Longhorn/Vista, businesses still continued to buy PCs in droves. That’s why many Microsoft products seemed to be talked about long before they were released. There was an added benefit to this early sharing, or, as we called it, openness, which was used to generate platform momentum. With nothing to lose, Windows itself benefitted from a solid five years of momentum building in the early days before Windows 3.0. Back then it was all just an industry norm.

    In our world, the Windows business had just survived the Longhorn mess and recovered with Windows 7. We now faced an entirely new market situation. Windows itself faced structural challenges—actual alternatives in the market in the form of phones, tablets, browsers, Intel-based Macs, and soon ChromeOS. At the very least, people could just stick with Windows 7, which was fine by us, except that, given the new alternatives, most customers would not even consider buying a new or additional PC, which was very bad for us.

    Every bone in the Microsoft body would cry out to begin evangelizing Windows 8 as soon as we had plans. We were going to build a new platform, and evangelists wanted time to articulate the strategy so developers could weigh their alternatives. But doing so would also run up against Windows Phone 7 and the platform that team was evangelizing. The lack of strategic connection between Windows and Windows Phone was obvious, but addressing it at the time was fraught with difficult choices, especially in the context of competing with Apple.

    Then there was the biggest of all problems, again something they fail to mention in books. What if the big strategic bet we planned on making ran right up against our biggest partners and customers? The whole idea of advancing Windows without our partner Intel and the major PC makers Dell, HP, and Lenovo would be heresy, plain and simple. We are talking about Wintel, after all. Not only were we planning on a new chipset, but we also knew we would offer something radical with respect to the actual computer we’d offer customers. Double heresy.

    As a result, the vision we created for Windows 8 was not specific in its broad communication about the role that alternate chip platforms, SoC or system-on-a-chip, or new hardware would play in the plans. By using the term SoC we could account for both ARM, the UK company whose chip designs were used in virtually all mobile phones and tablets, and Intel, which continued to work on a competitive SoC with its Atom-branded chips. While the engineering was well underway, the degree of the bet still needed a bit more data and experience to decide if we could execute specifically on bringing Windows to ARM. By using SoC, if the term leaked, we could always point to Intel’s latest Atom chips as the goal. The cost of openly defying our own ecosystem and then failing to deliver would have been immense given the state of the PC market.

    As far as how these alternate chips would come to market, we had not yet decided on a complete plan. Would we go the standard route, which was to evangelize to the OEMs as we did with Media Center PC and later multitouch support? Would we build a first-party demonstration device and use that as the basis of evangelism as we did with Tablet PC? Or would we commit to what was either unimaginable or incredibly dumb, depending on whether you were our customers/partners or BillG, and design, build, and sell our own new device? The vision would be silent on this. The plans were still being made and would be resolved just a few weeks after the product vision was communicated.

    With the iPad announcement described in the previous section, the pressure on everyone to respond was immense. For some, the rise of Android was even more worrisome, primarily because history suggested Apple was less of a real threat. Google caused more consternation because it was growing so large so quickly in an area entirely new to Microsoft. Either way, even though Microsoft was among the first to enter the smartphone market, by 2010 our share remained in the low single digits and would never get much higher.

    The app revolution underway in Silicon Valley was on the iPhone. And we were missing it. I learned this firsthand sitting in the pouring rain after a wedding in San Francisco in 2010, when everyone else was using a new iPhone app to summon a limousine late at night. It was my first experience with UberCab, the on-demand car service, and we couldn’t summon a car on my Microsoft phone as we got drenched. The app gap was just getting started. Every day it seemed like a new company released a new app for the new iPhone platform, and none of that innovation was happening on Windows. I swear it felt a bit like what IBM must have experienced when everything new was on Windows and all that was on OS/2 was what they paid to have there.

    Windows and Windows Phone: Alignment?

    The Windows Phone Team was in the midst of a significant reinvention of the phone platform, originally code-named Photon, which became Windows Phone 7 or WP7. Its fall 2010 release came more than six months after the rollout of the vision for Windows 8.

    There were many challenges in entering, or re-entering, the smartphone business. Bootstrapping an ecosystem was chief among them. Finding the right hardware partners proved difficult in the face of competition for those same partners from Android. There was also the challenge of building a differentiated product when the high-end was so solidly iPhone while Android seemed to cover the breadth of the market. To many reading this, the market might appear analogous to the evolution of the PC market, except Android has the role of Windows and the premium niche occupied by iPhone is much larger than the eight share points Apple computers achieved.

    WP7 faced several software challenges simply because the core operating system was so old. Support for the latest capabilities across graphics, networking, multicore CPUs, removable storage, and devices (such as the NFC reader required on Japanese mass transit) was becoming increasingly difficult, if not impossible, as was enabling the product to work worldwide across languages with complex characters and input methods. A hot topic on the heels of the product release was the rollout of 4G, or LTE, technology, especially in Asia, and the challenge WP7 faced in building support for it into the operating system.

    When it came to synergy with Windows, the developer platform was the key challenge. Windows Mobile, Microsoft’s original phone OS, supported a subset of the Developer Tools strategy, which had only a nominal relationship to Windows itself. Breaking with that platform, WP7 made a new bet on the nascent Silverlight project, a decidedly non-obvious strategy relying on a technology essentially salvaged from the prior big bets in Longhorn, as I described when I supported moving the team and early project out of Windows. As a competitor to Adobe Flash, it seemed most suited to lightweight games, video, and photos designed for a cross-platform experience. A significant class of apps, sure, but as a general-purpose platform, a Windows platform, Silverlight was inadequate and would face challenges in maturing into a complete platform. For example, there was little hope of building an email client or Office in any enduring way using Silverlight’s capabilities as they were. That was certainly my opinion. They chose a strategy that was first and foremost based on time to market and that leveraged a new initiative in Developer Tools that many were very excited about, including the evangelists. The use of Silverlight felt leveraged and very…strategic. This will be further discussed in the next chapter. Given the break from Windows Mobile, there was an opportunity for a strategic platform alignment that was not taken advantage of.

    By the time it was released, with its stark Metro tiles and hubs rather than apps, Windows Phone felt unique and differentiated, despite its potential limitations. A wildly enthusiastic following for WP7 would develop within the Windows community, despite the paucity of apps relative to iPhone. Walt Mossberg wrote a review with the headline, “Microsoft’s New Windows Phone 7: Novel But Lacking.” He concluded “…Microsoft has used its years in the smartphone wilderness to come up with a user interface that is novel and attractive, that stands out from the Apple and Google approaches, and that works pretty well.” Ultimately, he did not recommend it and said it was neither as good nor as versatile as iPhone or Android.

    Sales of Windows Phone would peak roughly a month after the initial launch of WP7 and would never again match that launch spike.

    From my perspective, Microsoft’s phone business faced many challenges, but chief among them was a weak platform for building apps and, related, an underpowered operating system relative to modern computing needs and competitors, even with the new WP7. After launch there were rising numbers of Windows Phone apps, but the vast majority were simply web sites hosted inside an app without much, if any, work from the owners of the web site. The flagship apps were mostly paid for or even supervised by Microsoft. The lack of enthusiasm from third parties did not match the optimism within the halls of Microsoft and from our biggest fans. Being the third choice in the market, even a Microsoft choice, was not enough.

    I’d soon learn the size of the hill that needed to be climbed for a new Windows.

    The industry, however, remained optimistic that WP7 would be a contender for the number two spot behind Android, as BlackBerry and Symbian (from Nokia) continued to decline, and iPhone was viewed as expensive and low volume. IDC predicted even as late as 2011 that Windows Phone would go from about 3.8% share to over 20% share by 2015.

    Regrettably, that 3.8% share was the platform’s peak.

    We planned Windows 8 while WP7 was still finishing. We were about nine months ahead, both teams working hard to achieve a shared view of plans with what might become both Windows 8 and Windows Phone 8.

    The problem was that Windows Phone and Windows did not share aligned schedules, which made most efforts at synergy optimistic at best and an academic exercise at worst. In a quest for synergy between groups, I long advocated aligning dates, especially with respect to making firm commitments. In all of Microsoft history, the only times teams successfully collaborated were when schedules aligned from start to finish and the teams shared a complete focus on that one goal. We seemed stuck, however, unable to align our differing views of how fast we could move and what should be a collective priority. Should we have stopped working on Windows or stopped working on a phone? These questions seemed silly since obviously neither made sense at the time. I was fond of repeating a mantra that sometimes to speed up we first must slow down. To most that came across as dumb, especially given the growing prominence of Google and Facebook, which led with speed. When groups aren’t aligned, getting aligned is extraordinarily difficult and requires painful choices. Was Phone ahead of Windows? That couldn’t be, because it was built on an old code base. Was Windows ahead of Phone? That couldn’t be, because we had not yet done any of the work to be a mobile OS. Should we have not released WP7? Should we have stopped working on Windows 8 until after WP7? Would we then have done another reset of WP7 instead of a fast follow-on release? These were non-trivial questions to ask.

    Apollo, the code name for Windows Phone 8, would start well after Windows 8 had started, plus there was the interim Windows Phone 7.5 release, code-named Mango. We could have aligned on many things with a schedule where Apollo finished after Windows 8, but the Phone Team did not want a schedule to stretch out that far. Our planned Windows 8 availability was June 2012. To the Phone Team, finishing Windows 8 code in April would not have allowed enough time to make the holiday 2012 season with phones. From my perspective, our April 2012 date for Windows 8 was aggressive. If Windows 8 were the next Windows 7, then in all likelihood we would have hit the date, but given the massive scope of the project I could not be so precise (shades of schedule chicken with my friends in Germany a decade earlier).

    Absent alignment across the full schedule, at every step the teams would be doubting each other, making alternate plans, or hedging on alignment. In this case, Windows Phone appeared to me to have more confidence in their target dates than I did in mine. Depending on the scope of changes in the phone plans that seemed plausible.

    What would an aligned plan look like?

    It would be a Windows Phone built using the new Windows 8 operating system. A plan like this would address both the operating system shortcoming of WP7 and the limitations of the Silverlight platform. Executing would have been straightforward, assuming the Windows Phone business was comfortable with the schedule and doing the work to make sure Windows 8 ran on the hardware phone partners required. The effort to bring Windows 8, as planned, all the way down to a phone level of hardware was significant but with a fully committed effort from the Phone Team it was entirely possible—Apple proved that by bringing OS X to the iPhone.

    A plan where Windows limited itself to what was in Windows Phone 7.5 amounted to simply delaying the start of Windows 8 for another two or more years. This quickly began to feel like Longhorn all over again. Besides, the WP7.5 code base and developer platform were no starting point for a long-term platform play, and the phone required far too much of Windows anyway.

    This was another round of strategic handwringing and unrealistic thought experiments, since no one wanted any team to slow down, stop, or rethink plans in any way that resulted in slowing down or stopping. No one.

    I was certainly not going to force the issue. Another team’s schedules were their business, and I had enough concern over executing our aggressive plans. I retreated from the debate. I wasn’t willing to force my idea of a plan without more support from Windows Phone.

    Everyone was concerned about time to market—who wasn’t? Tens of hours in meetings were burned up on hypothetical conversations about aligning on overly specific features or creating seemingly impossible synergies, where the whole conversation was built on a foundation of dates that were not accurate to better than three to six months. Creating mythical savings by shaving weeks off testing or partner releases was academic. These handwringing conversations drove me nuts, like my 8th grade science teacher’s reaction when I used too many significant digits just because my TI-35 could display them. We needed a coarse and big bet, not a gerrymandered solution that avoided disturbing existing plans, plans that were not getting us to a stronger position.

    In these discussions, we found ourselves detailing minutiae of Bluetooth drivers and graphics subsystems and how to share those, which were good goals but entirely divorced from the broader challenges of alignment or the step-function change required to earn a seat as a leader. We seemed unable to discuss the deeper strategic issues, like the developer platform or the need to move all of Windows Phone to a new OS, and that a new OS had to be built first. There was nothing specific to Windows and phones in this situation; rather, this was a typical Microsoft pattern of strategic dysfunction. When faced with a big problem, we’d devolve into a debate about tactics, trying to find some alignment with minimal impact, then we’d surface with disappointing or unrealistic plans. It was the classic big company dynamic where the PowerPoints got longer and more divorced from reality and then one person in the room had to step back and say, “wait a minute.” That was usually me or JonDe.

    Even within our own Windows team, there was a small group in JonDe’s organization that wanted Windows 8 to bet on making a phone from the start. Several key leaders from the Windows NT era were now on the Phone Team and remained close to those in leadership roles in Windows, creating a natural collaboration point. It was a totally rational bet, after all, because that was what Apple had done when it reengineered OS X to be the basis for iPhone and iPad, and it was also what Microsoft needed. Apple’s mobile product did not, however, need to maintain an existing business while at the same time moving it forward. It did not carry the heft of the Windows ecosystem or the burden of moving the product to SoC/ARM in a single release. Nor did it face the reality of being third to market in a new and clearly defined category.

    I give huge credit to those on the Windows team who pushed to build a phone. As audacious, and that’s the right word, as our Windows 8 plans were shaping up to be, building a phone from within the Windows group was one step more than we could take alone. These were difficult conversations to have. The team heard the desire to push harder and responded. Had the company been a bit more receptive in a real and committed, not symbolic, way, then perhaps I might have been convinced to reach even further than we did with Windows 8. No doubt I was already nervous about the Windows 8 plan, and as such there’s no escaping that adding one more constraint—not building a phone—was welcome.

    Apple had the benefit of a clean break with nothing to lose when it built the iPhone. We just did not have that option, or at least did not consider it.

    My feeling, and decision, was that competing with the iPad was an enormous leap for Windows 8. On paper, the idea of building an OS on ARM, a new developer platform, an app store, a new user experience, and working on a new generation of hardware seemed like an absurd bet for Microsoft to get done all at once with its flagship product. In big company lingo, it was a stretch goal for the ages. I was worried about the problems that befell IBM when it embarked on Project Stretch, trying to get too much done too soon. That was hubris.

    Building a phone would have been an irresponsible effort from just the Windows team. By constraining the first release of Windows 8 to tablet-sized devices, we had the chance to achieve success in an area that was already our strength: productivity and information work. Tablets had a bigger screen, more room for user interface, more area for battery, and a logical size for a keyboard. Therefore, during our planning I drew the project boundary at targeting a tablet device rather than a phone. We knew we could do a phone in the next iteration. In fact, as we’ll see, the engineering leaders were already making proof-of-concept progress in this regard. That’s why it felt rational to constrain now and expand later.

    Why did building a phone now keep percolating up as an idea? Primarily it was because of the operating systems tradition of leapfrog products: the old management view of developing two products at a time, in theory spending more time on a future release while doing tactical work in the immediate release. Recall this methodology from the sequence of Windows code names, Chicago/Cairo, Nashville/Memphis, and so on. Also recall that the second release never made it to market.

    The idea of the Windows 8 team building a phone while the Windows Phone Team built its own Windows Phone on a different code base seemed to follow this logic. It seemed logical to a company that still held a collective, though slightly revisionist, view of the past: that we had cleverly worked on the original 16-bit Windows code base and transitioned it to 32-bits, while also building a native 32-bit Windows NT to become the real future of Windows. That is a great story, but it leaves out ten years of warring groups, competitive missteps, and failures to deliver, plus the pure luck that both IBM and Apple all but failed at everything they did, while all the apps worked across all those variations of Windows. We were in no position to endure or accomplish any of that—the paradigm shift was one we were responding to, not creating.

    As we will see, it isn’t clear that delaying one or the other product would have fundamentally changed the trajectory once the bet on WP7 was made, a bet that was made in the middle of building Windows 7.

    As we pulled together and began to circulate the draft vision memo, the excitement and chatter in the hallways increased. The team was anxious to start work. Microsoft was always at its best when it faced a crisis, and this was a genuine crisis. Those working on the deep parts of the operating system were super excited about “porting” to SoC (I mean ARM specifically, as Windows already supported Intel SoCs), which went back to the roots of Windows NT and supporting many chipsets. They were confident. Those thinking about building apps were extremely excited about building a new Windows API for modern, secure, power-efficient, multitouch applications and the challenge of doing so as a core part of Windows and not another “random” new runtime from Microsoft. The user-experience team had fully embraced the idea of melding consumption and productivity.

    The one question everyone had was about the target computer—where was our iPad and who would make it? Everyone was asking, “Would Microsoft build a computer?” There was little faith the ecosystem could produce something as magical as the iPad. There was a great deal of hope that we would, as a team, actually do the thing everyone thought it would take to truly enter a newly competitive market and redefine what Windows could do.

    Whenever asked I said, “I don’t know.”

    I definitely wanted to, but we had not yet committed. I’d been lobbying for a couple of months to make that happen. I knew we had to, and I knew that not doing the hardware would be the same compromise Microsoft had always made, and I was certain we would fail as we had in the past. I needed to be very careful about words and body language when it came to hardware. As I saw with the fallout of the Courier project, the OEMs (reminder: Microsoft’s largest customers) did not take kindly to first-party competition. I would maintain this position for the start of the project.

    Confidence was high. Our plan for Windows 8 was daring and bold, even audacious. As I write this today, reading the plan and in particular looking at the prototypes from Vision day, I am struck by just how much we set out to do. Spoiler alert: we actually got most of what you can see in the videos done for the release.

    On March 24, 2010, eight weeks after the iPad announcement, 6,000 Redmond-based employees gathered at the Seattle Convention Center for the Windows 8 Vision Meeting. We invited executives from around the company to a VIP section in the front rows, including Office, Developer Tools, Server, Mobile, Research, and more. As in the past, the company’s product executives received the Vision memo and even the option of a smaller, focused version of the plans for their staffs if they wanted.

    Julie Larson-Green led the Vision process and the resulting memo and plans. She and the team perfected the best of top-down, bottom-up, and middle-out planning. They also excelled at distilling the plan down to something thousands could understand and presenting it in a compelling manner.

    The admin team once again did an amazing job on the whole event, including this time getting transportation from Redmond to the downtown venue. They even secured the space that was already staged for the Microsoft TechReady field training summit. We just had to promise to end on time because they had booked the venue until the end of the day. We saved hundreds of thousands of dollars this way. So proud!

    We had the full range of collateral for the vision: the 30-page long-form vision memo, the two-pager vision everyone would tack to their corporate-standard cork board, prototype or sketch videos for each of the six vision areas, and a mock press release. The only thing we were totally silent on was the big question of building our own device.

    The Windows 8 mock press release was my favorite. With the headline “Microsoft Redefines the PC with Windows 8: Breakthrough Release Brings Customized Experiences for Consumers, a New Business Model for Developers and Strong Value for Businesses” the opening paragraph read:

    Redmond, WA - RTM Date – Microsoft today announced the completion and Release to Manufacturing (RTM) of Windows 8, a breakthrough release focused on humanizing the web. Designed for more natural interaction with the PC, Windows 8 enables a consistent and comfortable experience across the screens and devices you use every day. The Windows 8 experience scales consistently across the broadest array of devices: from small-screen slates to PCs to large-screen TVs, yet with no change in system requirements from Windows 7. In addition to its groundbreaking user experience, Windows 8 enables developers to create compelling experiences faster, easier, and reach new markets quickly.

    The theme of the day was “8” of course. Everything was about the number eight, right down to the use of Saturday morning TV videos on the number eight—"eight, eight, eight let’s sing a song of eight.” The theme of eight continued with a walk-in video of trivia questions related to the quantity eight.

    After a brief introduction I asked SteveB to come on stage to provide context on the importance of Windows to Microsoft. His energy level always exceeded that of most speakers, and he was always difficult to follow on stage. I did not tell him in advance, but for his walk-in video I used what became an infamous Windows sales video from 1985. It featured him in a tacky plaid coat doing his best “and that’s not all” description of all the features of Windows 1.0, “including Reversi,” all for $99. There was a great line at the end. He announced that the offer was not valid in Nebraska, an inside reference to the home state of Jeff Raikes, my former manager, the original leader of the Word team (then called OBU, the Office Business Unit), and SVP of the sales force. Most on the team had no idea this video existed and were puzzled by that line at the end. We also showed the now famous “Developers, Developers, Developers” rallying cry, which was all too appropriate for the event. While intellectually I understand why some might make fun of these performances, not only are they classic Microsoft moments, they show the genuine and electric passion SteveB had for the company and the mission we were on. They are as much a part of the success of Microsoft as any software we wrote. Steve probably delivered the phrase with the same passion 1,000 times, but it was immortalized in this video. In a later conversation we had on the audio app Clubhouse, Steve described what it was like to galvanize thousands of people and why such passion was important as a leader.

    I remained backstage, and after Steve exited the stage, we walked over to the front row and sat down. I was right behind Steve and would narrate and provide color as the presentation progressed. He had of course already read the vision, and we had our usual back and forth. Steve was an active listener and always had lots of questions during a presentation. I had sent the written vision to BillG as well but did not hear back much other than a renewed emphasis on the pen and handwriting.

    Entering the stage to the Schoolhouse Rock video “Crazy Figure 8,” JulieLar presented the vision of the product. The words she used repeatedly were “modern” and “tailored web,” which was our way of describing the new apps for Windows 8, the Windows runtime, and the tools required. The “tailored web” phrase was awkward and my doing, but we meant apps. It was meant to reflect our view that apps were taking advantage of the web platform, not just another runtime API on top of Windows—it was Windows.

    Julie presented six vision areas. It is worth going through each one in this text. Note that in the web version the vision-area slides used at the time are provided, and each area’s text description is followed by the video shown to the team.

    * Designed for a Tailored Web Experience

    * Engineered for Tailored Web Development

    * Connected and Ready to Use

    * Enhanced with Services

    * Approaching Consumer Electronics Quality

    * Future-proofing the OS

    While I was saying “chipset to the experience,” we always started describing the product from the experience. In almost every way the audacity of the release was most visible in the experience, though it was by no means contained there. From there we moved deeper into the implementation.

    The descriptions below are taken from the Vision Memo. Each of these high-level descriptions was accompanied by detailed subpoints articulating specific features that were (for the most part) delivered in the release.

    Designed for a Tailored Web Experience. In Windows 8, we will create an experience optimized for consumption scenarios, and the form factors and input methods that enable them. We will design a modern user experience that re-imagines the spirit and benefits of the web—connected experiences, informing in real-time, built on open platforms—but evolved for modern PC form factors. We will cultivate a new ecosystem of applications, optimized around multi-touch, sharing, people, and connection to devices, which together will create a web experience tailored to Windows. We recognize that customers store information about their identity, people, and the services they use in the cloud, and we’ll rely on Windows Live or other compatible services to complete our tailored experience.

    Engineered for Tailored Web Development. In today’s world, the web is an increasingly central part of how customers use computing devices. Given the broad customer reach, the ease of deployment, and the potential for monetization on the web, developers—particularly new developers entering the market—have focused intently on the web and have utilized the standards-based web technologies of HTML, CSS, and JavaScript. Today, Windows offers very limited value to web developers, who cannot use web programming models for reliable access to the capabilities of the rich client. In Windows 8, we will blend the best of the rich client with the ease of deployment and openness of the web platform and enable developers to easily create, sell and deploy applications that take advantage of native system capabilities such as touch, media, hardware acceleration, connection to devices and services, as well as the capabilities of specialized PC form factors. Windows Live continues to be an important showcase for our Windows offering and a growing part of our platform message. Windows Live focuses on being a premier developer for Windows and provides additional services, including Windows Live Connect, which developers can use as they build their applications.

    Connected and Ready to Use. Connectivity is the oxygen of digital life. People expect to connect to people, websites, data, devices, and applications whenever and wherever they want. They don’t want to think about connecting, they expect to just BE connected. They’re frustrated if connectivity is cumbersome or unreliable. Windows 8 PCs meet these challenges by easily connecting to people, places and data over any network; making network and device connectivity snappy and responsive on a wide range of hardware; intelligently managing network resources; delivering on the promise of connecting to work or home from anywhere; creating rich and delightful scenarios with wired and wireless devices; providing hosters more economical options for scaling their services; giving organizations the network policy controls to meet their security and compliance goals; creating more opportunities for Telco, retail and IHV partners to participate in and profit from the Windows experience; and offering programmers the ability to use the most important functions of Windows network and device capabilities in modern applications.

    Enhanced with Services. Web services are an established part of daily life, connecting people, devices and information in a diversity of contexts—home, work, and in between and in an increasing number of geographies. Seamless web services that understand context can move software from the mundane to the modern. Services help realize the promise of connected technologies, dynamically delivering digital assets—content, media, applications—into contexts that delight customers. Services enable feedback loops and improvements over time, demonstrating customer-driven decision-making, founded in data, and ensuring that the experience becomes increasingly relevant and richer. Services in Windows 8 enhance the customer experience when a connection is present, while respecting a quality baseline experience in a disconnected state. Windows 8 services help us connect with customers, understand their needs and pain points, and enrich their experience over time through dynamic, contextual delivery of content and applications relevant for their locale and in their preferred language.

    Approaching Consumer Electronics Quality. In the early days of the PC, most people who owned a computer were hobbyists or technical enthusiasts. As enthusiasts, their expectations were generally low around how much extra maintenance, repair, and tinkering were required to keep PCs running well. In fact, the amount of skill and know-how to keep a PC in top shape was often the source of bragging rights and status among enthusiasts. The Windows 8 PC is easy to own and operate, from the day it comes out of the box to the day the owner decides to retire it. Building such an experience starts with our partners, who design hardware, drivers and applications that are a considerable part of the customer experience. To deliver a great PC to customers, we ensure these partners have the tools and data they need to create and deploy Windows images, assess their quality and solve any problems discovered. In many ways this is a journey that started with Windows 7, where we took a major step forward with both in-box quality and with the work we did with our partners to deliver great systems. Windows 8 takes the next step, with additional capabilities built into Windows as well as tools to help our partners deliver high-quality systems. Hardware platform investments and shared bets with our silicon partners, OEMs, ODMs and IHVs increase system quality. These investments result in the most stable, responsive, power-efficient and low-maintenance Windows we’ve ever produced, running on systems that take full advantage to deliver a quality experience that matches that of consumer electronics.

    Future-proofing the OS. One of the greatest strengths of Windows is its versatility. Windows runs on hardware ranging from netbooks to huge multi-processor servers, in roles spanning touch-enabled PCs in the home to massive databases in the enterprise. To make this possible, we build Windows on a powerful set of core services that deliver performance, scalability, reliability and security across a wide range of form factors, user scenarios, and server workloads. And, as the industry innovates in the core components of computing—such as processors, networking and storage technologies, as well as device types—investments in the core of Windows 8 ensure that Windows remains the leading way industry innovations reach our customers.

    There are several current trends that will have a particularly significant impact on the core of Windows in the Windows 8 timeframe. System-on-a-chip (SoC) vendors are delivering new offerings that enable emerging form factors such as slates to rival the capabilities of today’s PCs. Virtualization technologies are redefining the boundary between the operating system and the hardware on which it runs and the way that companies deploy and manage both their servers and clients. Further, customers tell us their use of virtualization technologies will grow dramatically. The price of commodity storage is plummeting at the same time that storage use is exploding in both consumer and business settings. And new security technologies are emerging that will affect the way customers experience computing in the future. Our investments in future-proofing the core ensure we’re well-positioned to compete with a wide range of innovative device types, take a leadership position in scalable computing architectures based on commodity hardware and devices, and strengthen Windows as a secure, trustworthy platform.

    Sitting in the audience with our guests from around Microsoft and next to SteveB, I could not resist emphasizing or highlighting aspects of the meeting, especially the videos. There was a lot of excitement but there were many questions. The biggest question from SteveB to me was “What about Office?” In our demos of all that was new, particularly with the “Tailored Web,” we did not show Office. For the whole of Microsoft’s enterprise business and much of the company’s history, the business was not just Office but Office and Windows.

    Without Office there would be no Windows and without Windows there could be no Office—that reinforcing flywheel was essential to both but also essential to the ecosystem. A new Windows without a new Office would send a message to the ecosystem that the new release was not important. Office 95 and Windows 95, Windows 2000 and Office 2000, the XP wave. When Windows didn’t have a new Office or when Office didn’t ride on the coattails of a new Windows, we struggled. Either new PCs were not seen as exciting, or developers would not know what to do with a new Windows, or enterprises would not be interested in the effort to deploy just one of the two products everyone used.

    Our answer was simple yet just as bold as the Windows plan—we needed new software from Office just like we needed new software in Windows. The existing tools would run, but the vision encompassed a new user interface, a new API for expressing apps, and a connection to online services. In addition, this API and the store for developers to use would impose a new model where an app did not take over the PC in the same way we were all used to.

    One of the more animated discussions, right then and there at the vision meeting, was about the Office Ribbon. Steve wanted to know where it was in our demos. While we were still early, we knew the Ribbon was not the path forward for touch devices or smaller screens. We knew that because the people behind the videos we were watching were the actual people who designed the Ribbon. This was not lost on me but did not seem to resonate in the VIP section at that moment.

    For me, the shift that was underway was going to be bigger than the transition from character mode to graphical interface. Just as Windows brought graphical interface while also carrying forward the existing character applications, we would bring forward existing graphical applications. We showed this time and again in the videos. The future, however, was in a new kind of application. In many ways we felt bringing forward classic Win32 applications, Intel hardware, and all of the ecosystem was an opportunity for Microsoft. It was decidedly a compatibility play and not one we wanted Office to come to rely upon. Said more bluntly, Microsoft did not optimize the Windows experience for MS-DOS spreadsheets and word processors—they ran better than they ever had previously, but it was abundantly clear that new apps built on Win32 were the strategy.

    The challenging part was that there was little news in the vision meeting for those watching from the front row. The past year had been a march to get to this point. The internal partners and their teams had been part of the process to get us here. It was, perhaps, the shock of seeing it all in one place that suddenly made it all real.

    We were really making a very big bet. Yet one look around the marketplace and it was obvious we had to. The bet to make Windows in the first place, announced just months before Macintosh, was equally bold. Like Apple with the iPhone, in 1983 Microsoft had nothing to lose. In 2010, there were potentially immense downsides to making a bold, but incorrect, choice for the next Windows. That made everyone a bit nervous.

    SteveB, however, is every bit the optimist and cheerleader. He left the meeting pumped. He had a million questions, but he was definitely excited. He also wanted everything to get done sooner than we planned. Me? I just wanted to get it done when we planned.

    For the team, the meeting was fantastic. We had a vision, a plan, and a team ready to execute. Our ship date was set for April 4, 2012, under three years from the completion of Windows 7. When I look at these plans and the video today, all I can do is wonder at just how much we signed up to do and in just two years. As we’ll see, we also got most everything done and practically on schedule.

    We did not know it at the time, but PC sales were peaking—industry analysts forecasted that the 2010 rate of PC sales of 317 million units would rise to almost 500 million by the time we shipped Windows 8. We shared these estimates with the team at every step. We had to.

    Confirming our worst concerns, that never happened. Not. Even. Close.

    Emotionally, however, the external climate gradually darkened as PC sales flattened over the course of the release, and then began their decline. The forecasts of 500 million PCs selling in a year gave way to forecasts of tablets, particularly high-volume Android tablets, overtaking laptop sales in two years. From the boardroom to the schoolhouse, the iPad was quickly becoming a fixture. Travelers routinely commented on the use of tablets on planes and the rise of iPhone docks and chargers, and even tablets, in hotel rooms. Cash registers, information kiosks, and even signs in elevators were rapidly becoming iPads. Smartphones surpassed PC sales due to the rise of Android—three years after the iPhone was released, phones were outselling PCs, and the dreams of emerging markets becoming a source of PC growth evaporated. Whether tablets could substitute for PCs was a debate that raged on a decade later, even as PC sales declined continuously and iPad sales grew. PCs were baked into the fabric of the workplace and, to a lesser extent, schools, but any new business capability or use case was appearing in the context of a browser, almost always Google Chrome.

    As we began Windows 8, PCs transformed from a unit-growth business to a replacement business in the blink of an eye, making our bet far more existential in nature. Given the way PC sales were shifting to replacements in business, having a Windows 7 foundation upon which to rest—or fail—was, in fact, comforting.

    We started coding just weeks after the worldwide availability of the iPad. We were focused and on a mission. It was amazing.

    Steve Jobs appeared at the All Things D conference the first week of June 2010, celebrating the hit the company had created with the iPad. Riding this wave of success, Jobs seemed to have a great time being interviewed by both Walt and Kara, with Kara noting that Apple had surpassed Microsoft in market capitalization. I’m sure no one paid attention to such things, ouch. I cringed at hearing that from my front-row seat, a spot I rarely took, preferring to sit in the back.

    In the span of a few minutes, Jobs made clear the case for the iPad as the future of the laptop, without even using the phrase “post-PC era” that had become all too common. He dismantled our Tablet PC strategy and even relegated the Mac to legacy status.

    Among the jabs, perhaps the one aimed most directly at Microsoft’s unique strategy concerned the role of the pen and handwriting, so core to BillG’s Tablet PC vision. Jobs said, “handwriting was the slowest input method ever invented.” He said, “If you need a stylus you’ve already failed.” In his view, the Tablet PC failed because it was based on a PC, had all the expense of a PC, the battery life of a PC, all the weight of a PC, and, most of all, a PC operating system that needed the precise tip of a cursor arrow and a mouse.

    It would not end. He was right, and the tough part was that we had known it ages ago, but the iPad delivered the proof.

    I sat there writing it all down. But I was also agreeing. Finally, he pointed out that a “PC operating system” couldn’t be used with touch, because all the apps would have to be rewritten anyway.

    Bingo. That was the major lesson of Windows 7 and the entirety of the strategy for Windows 8 in one quip.

    While it was still early, the strengths and weaknesses of both the iPad and our plan were materializing. The iPad had no legacy strategy—it was a total break from the past. Importantly, Jobs defined a new operating system with a new interaction model. An OS is more than that, though, and for the rest of the OS—the interfaces to hardware, memory management, storage, networking, devices, and more—as he himself had said three years earlier at the iPhone launch, the iPhone and thus the iPad were in fact based on a PC operating system, OS X. Apple reimagined not the whole OS (though, to be fair, as we will see, much of the guts of the OS changed as well) but the user experience, or what techies call the shell. That was exactly our plan for Windows 8.

    As if to somehow call out to me to validate the plan, Jobs in his inimitable manner also shared some news that had never been shared before. The iPhone did not lead to the iPad; rather, the iPad was the original product under development years earlier. Upon seeing the technology, Apple made the phone the product goal, but it took time for the hardware to catch up and for Apple to regain its footing as a corporation. We had already debated this choice, but knowing we intended to bring forward more compatibility and more openness of the platform, we started with a tablet.

    In what turned out to be a metaphor for the ages, Jobs compared PCs (including Macs) to trucks and the iPad to the passenger car for the masses. He said:

    When we were an agrarian nation, all cars were trucks, because that’s what you needed on the farm. But as vehicles started to be used in the urban centers, cars got more popular. Innovations like automatic transmission and power steering and things that you didn’t care about in a truck as much started to become paramount in cars…. PCs are going to be like trucks. They’re still going to be around, they’re still going to have a lot of value, but they’re going to be used by one out of X people.

    The internet had a field day disassembling the metaphor in a sea of pedantry that only Reddit and Twitter can produce. People were quick to provide the history of production years of trucks versus cars, the shipments of each over time, the reality that suburban drivers buy tons of trucks, and on and on. Why? Because of exactly what Jobs had said—this was a huge transition, and it was going to make many people uncomfortable. The more someone grew up with a PC (a truck), the more difficult the transition, and the more Apple would need to prove that the transition could work. This debate continues today.

    Outsiders heard that Jobs thought Microsoft had missed every opportunity or made the wrong choices, and in a sense that was his point without saying so directly. What observers did not know was that Windows 8 approached the next steps with a new perspective.

    Our goal was to reimagine what a PC could be—no stylus, no Intel chips, none of the OS baggage. From the chipset through developer opportunities to the user experience, we set out to build a new future for PCs by rethinking the assumptions that constrained our every move. We wanted to deliver better products for a modern era.

    From the outside it would look like we were copying what had become Apple’s proven success, but, in reality, it was parallel evolution, at least when it came to the tablet. The one thing we were not doing was precisely what we incorrectly thought Apple would do, which was to build a new tablet out of our old OS. We were building a new OS that happened to bring along the old OS. This was exactly what Microsoft had done from MS-DOS to OS/2, from that to Windows, and from 16-bit to 32-bit then to 64-bit.

    Because of its phone strategy and execution, Apple was ahead of Microsoft. That Apple had nothing to lose because it had built such incredible success in non-PC devices left a good deal of room and no expectations for Apple to reinvent computing for a changing landscape.

    Because of its phone strategy and execution, Microsoft was behind Apple. That Windows was mired in poor execution for much of this time left little room and high expectations for Windows to reinvent itself for a changing landscape.

    Still, we were going to be late with tablets, as the calendar would prove. At the same time, we weren’t making irrational or unproven choices, nor were we simply chasing Apple. First-mover advantage as a strategy has mixed results, so there was a good chance our tardiness could work in our favor.

    When our vision meeting ended, I stood at the exit as thousands filed out. The optimism and excitement were everywhere. We could not have been in a better position to execute.

    Our goal was nothing short of reimagining Windows from the chipset to the experience. The plans were bold to the point of audacious. To do otherwise was to stand still and let the new era pass us by—no way I would let that happen.

    On to 101. Reimagining Windows from the Chipset to the Experience: The Chipset [Ch. XV]



  • The launch of an innovative new product is always exciting. The launch of an innovative new product from a competitor is even more exciting. But what is it like when your main competitor launches an innovative new product at a moment of your own fundamental strategic weakness? That’s what it was like when the iPad launched on January 27, 2010. On the heels of the successful Windows 7 launch, during a time when Microsoft was behind on mobile and all things internet and in the midst of planning Windows 8, Apple launched the iPad. Many would view the iPad (and slates and tablets) as “consumption devices.” Steve Jobs and the glowing press that followed the launch viewed the iPad as a fundamental improvement in computing. Whatever your view, it was a huge deal.

    This post is free for all email subscribers. Please consider signing up so you don’t miss the remaining posts on Windows 8 and for access to all the back issues.

    Back to 098. A Sea of Worry at the Consumer Electronics Show

    For months, BillG and a small group of Microsoft executives believed Apple was going to release a tablet computer. It had been rumored for more than a decade. Tablet-shaped computers traced their roots to Alan Kay’s legendary 1960s Dynabook, plus there was that one on Star Trek. There’s a long-held belief among Trekkers that all Star Trek tech will eventually be realized. By 2010, Microsoft had a decade-plus of Tablet PC experience, mixed as it was. With Windows 7 we brought all the tablet features into the main product instead of a special SKU, so every version of Windows could run effectively on any PC with tablet hardware, such as a pen and touch screen. What was different about the Apple rumors in 2010? What made us more nervous? Why, this time, did we believe these rumors about a company for which predictions had always been wrong? No one had predicted the iPhone with any specificity.

    Microsoft and partners had invested a huge amount of time, energy, and innovation capital in the Tablet PC, but it was not breaking through the way many hoped, such as how we visualized it in the Office Center for Information Work. The devices for sale were expensive, heavy, underpowered, had relatively poor battery life, and inconsistent quality. Beyond the built-in applications, OneNote from Office, and a few industry-specific applications pushed through by Microsoft’s evangelism efforts, there was little software that leveraged the pen and tablet. Many, myself included, were decreasingly enthused. BillG, however, was tireless in his advocacy of the device—and the fact that Apple might make one, and whatever magic Steve Jobs could bestow upon it, only served to juice the competition between companies and founder/CEOs. BillG remained hardcore and optimistic about the pen for productivity and a keyboard-less device for on-screen reading and annotation.

    To BillG a PC running Windows that was shaped like a slate or tablet seemed inevitable. For many of the boomer computer science era, the fascination of handwriting and computing on a slate had been a part of the narrative from the start. Over the past 30 years, few of the technical problems had been solved, particularly handwriting but also battery life and weight. Then came the iPhone and multitouch.

    That Apple would build such a PC was more credible than ever because of their phone, though by Microsoft measures the iPhone still lacked a stylus for pen input, something Steve Jobs openly mocked on several occasions. The possibility made us nervous and anxious, especially knowing Windows 8 was underway.

    Collectively, and without hesitation, many believed Apple would turn the Mac into a tablet. Apple would add pen and touch support to the Mac software, creating a business computer with all the capabilities of Office and other third-party software, and the power of tablet computing. The thinking was that a convertible device made a ton of sense since that allowed for productivity and consumption in one device. Plus, techies love convertible devices of all kinds.

    There were senior executives at Microsoft with very close ties to Apple who were certain of Apple’s plans and relayed those to Bill. Bill would almost gleefully share what he “knew” to be the case, using such G-2 to prod groups into seeing the opportunity for his much-loved tablet strategy.

    There were debates consuming online forums—rumors rooted in the Asian supply chain as to what sort of screens and chips Apple might be purchasing for the rumored product. Some thought there would be a “big iPod” and still others thought Apple would develop a product tailored to books, like the two-year-old Amazon Kindle. In other words, no one had a clue and people were making stuff up. Some were even calling it the iPad, not because there was a leak or anything but because it made more sense than iTablet or iSlate, and because at one point (in the late 1990s!) Microsoft had something in R&D called WinPad. The industry had not even settled on the nomenclature for the form factor, cycling among tablet, slate, pad, MID, convertible, and so on.

This CNN story by Kristie Lu Stout from January 2010 detailed the history of tablet computers, including Apple’s own past going back to well before the Macintosh.

At least in the months prior to launch, zero people, to my knowledge, thought that Apple had in mind a completely novel approach. An aspect of disruptive innovation is how incumbents project their views of strategy onto competitors without fully considering the context in which competitors work. As much as Microsoft primarily considered Apple to be the Mac company that happened to stumble into music players and then phones, by 2010 Apple had already pinned its future and entire product development efforts to iPhone and what was still called the iPhone OS, which was based on OS X, the Mac OS, but modernized in significant ways.

    On January 27, 2010, at a special press event billed as "Come see our latest creation," Steve Jobs unveiled the iPad. I followed the happenings on the live blogs. This was one of the first Apple special events used to launch products, as the previous 2009 MacWorld was the last one in which Apple participated.

The event opened with the reminder that Apple had become the world’s largest mobile device company, followed by Steve Jobs quoting, with a bit of a chuckle, a December article in The Wall Street Journal: “The last time there was this much excitement about a tablet, it had some commandments written on it.”

As part of his build-up to introducing the iPad, he pointed out that in defining a new category, a tablet needed to be better at some important things, better than a phone or a laptop. It needed to be better at browsing, email, photos, video, music, games, and eBooks. Basically, everything other than Office and professional software, it seemed to me—though this would come to be known as “creation” or “productivity” by detractors who would posit that the iPad was a “consumption” device. As we will see, the Microsoft Office team was already hard at work bringing Office apps to the iPad.

    The launch event deliberately touted “latest creation” in the invitations, which I always thought was a bow to creativity as a key function. What many pundits and especially techies failed to appreciate was that productivity and creativity had new, broader, definitions with the breadth of usage of computers as smartphones. Productivity and creativity were no longer the sole province of Word, Excel, Photoshop, and Visual Studio. The most used application for creating was email and it was already a natural on the iPhone, only soon to be replaced by messaging that was even more natural.

    As the presentation continued Jobs delivered his first gut punch to the PC ecosystem in describing what such a device might be, as he set up a contrast for what the new category should do relative to netbooks.

    “Some people have thought that that’s a Netbook.” (The audience joined in a round of laughter.) Then he said, “The problem is…Netbooks aren’t better at anything…They’re slow. They have low quality displays…and they run clunky old PC software…. They’re just cheap laptops. (more laughter)”

    Ouch. He was slamming the darling of the PC industry. Hard.

The real problem was not only that he was right, but that consumers had come to the same conclusion. Sales of Netbooks had already begun their plunge to rounding error.

Jobs unveiled the iPad, proudly. Sitting in a Le Corbusier chair, he showed the “extraordinary” things his new device did, from browsing to email to photos and videos and more. The real kicker was that it achieved 10 hours of battery life—a flight from San Francisco to Tokyo watching video on one charge, recharged using your iPhone cable. It also achieved more than 30 days of standby power and, like a phone, remained connected to the network in standby, reliably downloading emails and receiving notifications. This type of battery management was something the PC architecture struggled endlessly to achieve. The introduction concluded with a series of guests showing specially designed iPad apps in the 18-month-old App Store, now with over 140,000 apps.

    The eBook-specific apps really got under our skin given how much this had been the focus of many efforts over many years. Being a voracious reader, BillG championed eBooks for the longest time. Teams developed formats and evangelized the concept to publishers. Still Microsoft lacked a device to comfortably read books. Then there was Steve Jobs reclined on an iconic chair.

    Games were the icing on the cake of despair given Microsoft’s efforts on both the Xbox and PC.

    But there was no pen! No stylus! Surely, it was doomed to be a consumption device.

    Then they showed a paint program that could be used with the touch of a finger. They were just getting started.

    There was no productivity! Doomed, for sure. Then they showed updated versions of the iWork suite for the iPad. The word processor, spreadsheet, and presentations package for the Mac had been rewritten and tuned specifically to work with touch on the iPad. Apple even intended to charge for them, though this would later change. Those tools had already been stomped by Mac Office, but they became unique on this unique device. The ever-increasing quality of the tools, particularly Keynote for presentations, quietly became a favorite among the Mac set in Silicon Valley.

    All these apps being shown were available in the App Store. They were curated and vetted by Apple, free from viruses, bad behavior, and battery draining features.

    Rounding out the demonstration was the fact that the iPad synchronized all the settings, documents, and content purchased with iTunes with a cloud service. This was still early in Apple’s confused journey to what became known as iCloud, but as anyone who tried to sync between a phone and a PC then learned, that it worked at all was an achievement.

The iPad also came with an optional cellular modem built in—on a PC one would need a USB dongle costing a couple hundred dollars and an elaborate software stack that barely worked, plus a monthly $60 fee. On the iPad, there was an unlimited data plan from AT&T for $29.99. Apple and AT&T also made it possible to activate the iPad without going to the store or calling AT&T. Minor, perhaps, but this is the kind of industry-moving innovation Apple rarely gets credit for: achieving what was impossible to do uniformly on the PC. Even today, mobile connectivity on a PC is at best a headache.

The pricing was also innovative. Apple had previously been called out as a high-priced technology vendor and for lacking an appropriate low-price response to Netbooks. There was no doubt the iPad would be portrayed as expensive. In fact, after drawing out the reveal of the price, Jobs announced it would start at $499, a shockingly low price point that was close enough to Netbook territory. The price went to $829 fully loaded with storage and 3G, in line with many loaded Netbooks. The price was hardly OLPC territory, but it was low, with $499 viewed as a magic price at retail.

    The product would be available in all configurations in 90 days worldwide.

    I promptly ordered mine. I also ordered the keyboard dock described below.

    It was all so painful. Each time Jobs said “magical” I thought “painful.”

There were so many things iPad hardware did that the PC could not do or had been trying and failing to do for so long that suddenly made all the difference: incredibly thin and light, all-day battery life, wonderful display, low-latency touch screen, 3G connectivity, multiple sensors, cameras, synchronizing settings and cloud storage, an App Store, and so much more. My favorite mind-blowing example was the ability to easily rotate the screen from portrait to landscape without any user interface action. It just happened naturally. At a meeting with Intel, the head of mobile products took an iPad out and spun it in the air, yelling at me in his thick Israeli accent, “When will Windows be able to rotate the screen like this?!” My head hurt. All of this was made possible because the iPad built on the iPhone. Yes, it was a big phone, but it proved to be so much more. It had so much potential because of the software.

It also had productivity software, and, to finally rub in the point, the first iPad even had a desktop docking station with a keyboard attached. Jobs didn’t need to address the complexity of adding a keyboard, but having a keyboard that worked without the touch-screen keyboard popping up and getting in the way was an important technical breakthrough, one rooted in using the iPhone-adapted OS X operating system kernel alongside a new platform for application software. This was one of many subtle points we picked up on that showed the foresight of the underlying strategy. It was obvious the keyboard dock was an “objection handler” and not a serious effort, but it motivated an ecosystem of keyboard folios and cases for the iPad until Apple itself finally introduced innovative keyboard covers.

    The conclusion of the presentation reminded everyone that 75 million buyers of iPhones and touch iPods already knew how to use the iPad. There was no doubt from that moment that the future of the portable computer for home or work was an iPad or iPad-like device—the only questions were how long it would take to happen and how much Windows could thrive on simply supporting legacy behavior.

    It was, as Jobs said, “The most advanced technology in a magical and revolutionary device at an unbelievable price.”

    The international magazine coverage of the iPad launch was mind-blowing. It was pure Steve Jobs, the genius. The Economist cover featured Steve Jobs as a Messiah-like character with biblical text over his head “The Book of Jobs: Hope, Hype and Apple’s iPad” as he held the iPad tablet not unlike Moses. Time, Newsweek, Wired, and just about every tech publication that still printed on paper did a cover story. The global coverage squarely landed the message that the iPad was the future of computing.

    From the PC tech press, the announcement drew skepticism. The iPad was, marginally, more expensive than dying Netbooks. It lacked a full-size keyboard for proper productivity. It didn’t have a convenient USB slot to transfer photos or files (that was the most common way of sharing files at the time). The use of adjectives like “full” or “proper” or “truly” peppered the reviews when talking about productivity. This was all strikingly familiar as it sounded just like the kind of feedback the MacBook Air received from these same people.

There was endless, and tiresome, commentary on how there could not be productivity without a mouse, a desktop, and overlapping windows (generously called multi-tasking, which is technically a misnomer). The irony was always lost on the person commenting—there was a time when the PC did not have a mouse and, in fact, the introduction of the mouse was viewed as a gimmick or a toy entirely counter to productivity. Or the fact that for most of Windows history, the vast majority of users ran with one application visible at a time, just like on the iPad. I collected a series of articles from the 1980s criticizing the mouse. Fast forward to 2010, replace mouse with touch, and these read exactly the same. It was as if we had not spent the past three years debating whether one could use a smartphone with only a touchscreen.

    Also, there were no files. How could anyone be productive without files? The iPad turned apps using a cloud into the primary way to create and share, not files and attachments. Apple would later add a Files app. Kids today have no idea what files are.

When they weren’t making fun of the name “iPad,” many were quick to mock the whole concept of an iPad as simply a puffed-up iPhone. Ironically, the mid-2010 iPhone still had a tiny 3.5” screen; it would not be until late 2014 that Apple relented and introduced a larger-screen iPhone, and a year later a 12.9” iPad. Apple in no way saw the iPad as simply a larger iPhone.

Walt Mossberg set a different tone with his review: “Laptop Killer? Pretty Close—iPad Is a 'Game Changer' That Makes Browsing And Video a Pleasure; Challenge to the Mouse.” Among the positive commentary, Mossberg said of the iPad Pages word processor, “This is a serious content creation app that should help the iPad compete with laptops and can import Microsoft Office files,” and, “As I got deeper into it, I found the iPad a pleasure to use, and had less and less interest in cracking open my heavier ThinkPad or MacBook. I probably used the laptops about 20 percent as often as normal.” He concluded with a reminder that this was going to be a difficult journey. “Only time will tell if it’s a real challenger to the laptop and Netbook.”

    Apple sold about 20 million iPads in the first year (2010 to 2011) while we were building Windows 8. As it would happen, 2011 was the all-time high-water mark (through this writing) for PC sales at 365 million units, or about 180-200 million laptops. The resulting iPad sales were not a blip or a fad in the portable computing world—10% of worldwide laptop sales in the first year! In contrast, Netbook sales fell off a cliff and all but vanished as quickly as they appeared. Each quarterly PC sales report was skittish as sales growth was slowing. At first the blame was put on the economy, or maybe it was a shortage of hard drives or the lack of excitement from Windows. It would be a while before the PC industry absorbed the impact of phones and tablets and later Google ChromeBooks. There was a brief respite during the first year of the global pandemic and work-from-home, but that too quickly subsided. It is expected that 2022 PC sales (including Google Chromebooks) will be about 300M units.

The iPad and iPhone were arguably the most existential challenges Microsoft had ever faced. While the industry focus and Apple’s business were on the devices, the risk came from the redefinition of operating system capabilities and software development and distribution. Apple had created a complete platform and ecosystem for a new and larger market. This platform came not at a time of great strength for Microsoft, but at a time when we were on our heels, strategically fighting for our relevancy. The Windows platform had been overrun by the browser, including the recent entry from Google that would soon eclipse Internet Explorer, as well as the recently announced ChromeOS. PC OEMs were in rough shape financially and had an uncertain future. Windows Server, as well as it had been doing, had failed to achieve leadership beyond the (large) enterprise market. Linux and open source dominated the public internet. Amazon Web Services and cloud computing consumed the energies of academia and start-ups. Windows Phone 7 had not yet shipped and Windows Phone 6.5 was being trounced by Apple from above and Android from below. Even hits from Xbox to SQL Server were not actually winning their categories but were distant second-place players.

    If ever “It was the best of times, it was the worst of times” applied to a company, the summer of 2010 was it for Microsoft. When it came to financial success Microsoft was in a fantastic position, but strategically and in “thought leadership” we were in a weak position.

Perhaps Microsoft could have made better bets earlier, squandering what could have been a potential lead? In October 2011, almost two years after the iPad was announced, I received an email from a corporate vice president of public relations. He relayed a message from a reporter asking about an unreleased Microsoft tablet with the codename Courier. The reporter wanted to know, “Why did Sinofsky kill it?” According to the reporter, the Courier project had been going on since before the iPad was released (this was important), and it was only just now, 18 months after the iPad became available, that it became clear Sinofsky had killed it.

    Oh no, they found out, I thought. Crap.

The problem was that I had never seen this project. I’d hardly heard of it, other than rumors of just another random project in E&D operating under the radar that had been cancelled more than a year earlier. There was a small story in the Gizmodo tech blog when the project, which was a design sketch/prototype, was cancelled shortly after the iPad announcement in April 2010, “Microsoft Cancels Innovative Courier Tablet Project”:

    It is a pity. Courier was one of the most innovative concepts out of Redmond in quite some time. But what we loved about Courier was the interface and the thinking behind it-not necessarily its custom operating system.

Courier was developed in E&D, the Entertainment and Devices division, home to Xbox and Zune. It was, apparently, a design for a dual-screen, pen-based tablet. A conceptual video rendering of the device leaked, and the internet was very excited. It looked to be the dream device for techies—it had two screens and folded like a book! As fascinating as it was, all anyone, myself included as far as I knew, had to judge it by was an animation. The first time I saw the animation was on the internet just before the project was cancelled.

That video leaked (by whom?) two months prior to the iPad announcement, during the height of tablet rumors and even before the CES show with Windows 7 tablets and early Android tablets. Gizmodo, which above praised it as one of the most innovative concepts coming out of Redmond, described the Courier interface relative to its favorite interpretation (or “what everyone expects”) of the Apple tablet rumors:

    The Courier user experience presented here is almost the exact opposite of what everyone expects the Apple tablet to be, a kung fu eagle claw to Apple's tiger style. It's complex: Two screens, a mashup of a pen-dominated interface with several types of multitouch finger gestures, and multiple graphically complex themes, modes and applications. (Our favorite UI bit? The hinge doubles as a "pocket" to hold items you want move from one page to another.) Microsoft's tablet heritage is digital ink-oriented, and this interface, while unlike anything we've seen before, clearly draws from that, its work with the Surface touch computer [the tabletop described earlier] and even the Zune HD.

    In hindsight, one just knows that Apple got a huge kick out of this device and the quasi-strategic leak from the team.

More importantly, I had to pick up the pieces with the PC OEMs who read the articles about this device and wondered whether Microsoft was competing with them or undercutting their own efforts at new Windows 7 tablets. In the leadup to the first CES with Windows 7 and our work on touch and tablets, trying to generate support across the ecosystem, this kind of leak was devastating. Aside from the appearance of hiding important details from partners, it looked like one part of Microsoft was competing with our biggest customers or, worse, that the Windows team was part of the duplicity. In his CES 2010 keynote, SteveB had to call out and bring special visibility to a new tablet from HP just to smooth over the relationship due to the Courier leak months earlier. There was non-stop scrambling from the time of the Gizmodo leak until he stepped off stage. HP, as the largest OEM, was (and remains) directly responsible for billions in Windows revenue, and thus profit. Months later, HP bought Palm and its WebOS software for $1.2 billion with every intent of creating a tablet with its own operating system. It would not be unreasonable to conclude that HP pursued Palm because of the Courier project, even given HP’s own history of OS development. Our OEM partners thought we were bad partners.

    The group of Microsoft influencers on the internal email discussion group LITEBULB thought we were foolish for cancelling what would certainly have been the next killer device. One person said, on a long discussion thread contrasting Courier with money-losing Bing and Xbox, “In my view, our apparent unwillingness to lose money on a few innovative, sexy products that people drool over is part of the reason we are losing the public perception battle to Apple and Google.”

Courier became a shorthand or meme for incompetent management at Microsoft. Given the climate around the company and a decade of a relatively flat stock price, such internal discussions contributed to a growing narrative of the death of innovation at Microsoft. For so many reasons it was readily apparent that, even if it had materialized as a real PC, Courier would have been as doomed as all the Tablet PC products before it, and even more so because of the dual-screen approach. My own team mostly thought I was foolish for not even knowing about the existence of Courier.

I looked unaware or dumb, or both, and it looked like I, along with all the powers that be at Microsoft, had stifled innovation.

We thought this story was over, and then, more than two years after that leak, we received the inbound indicating I was the culprit and the reporter awaited comment. When the video of the leak first surfaced, I sent an email trying to understand if the Courier project was related to Windows 7. Was it for sale? What version of Windows 7 was it running? Did OEMs know? I became concerned that there was a modified source tree of Windows 7 floating around, and how would that be released and supported? Was this another Tablet PC or Media Center code fork and potential mess? I was told it would run Windows 7, but heavily modified. Xbox was originally built from the Windows source code, so knowing the team I became concerned that playbook would be used again. We had just cleaned all this up with the release of Windows 7, but this new situation could turn into a real mess. We had just spent three years on Windows source code and sustaining-engineering hygiene, so this had better not take us back to where we came from, I opined. It was messy, but as it turned out the budgeting and management processes within E&D led to the demise of Courier, and it had nothing to do with the concerns I expressed after the fact.

My reaction was to just tell the reporter we would go on record that I did not have any role in “murdering” the Courier project and had no knowledge of it until I saw the leaked video. I said as much in email to the communications team. There was no shaking the reporter, who was certain of his sources and simply concluded this was some sort of misdirection from me and Microsoft. Whatever. A decade later, whenever a dual-screen device shows up in the market, Windows fans return to Courier and remark on “what could have been,” which acts like an SEO (search engine optimization) effort to join my name with killing innovation.

We looked bad from every direction: to internal innovation watchdogs, to OEM partners, as an executive staff lacking a coherent strategy, and most of all to our own Windows team. The innovation-or-lack-thereof narrative at Microsoft was dutifully fed by cancelling this project, which, given our experience with Windows 7, would of course not have been successful. There were so many problems. The entire experience shows the problem with demos, leaks, press sources, and what it is like to try to do (or not do) new things in the context of a broader narrative.

It has been a decade since Courier was cancelled, and people still hope to bring back this project; many still see it as symbolic of the company’s tendency to stifle innovative projects. The Surface team eventually released the Microsoft Surface Duo device, running a modified Google Android, a somewhat puzzling approach. Courier routinely shows up on lists of innovative projects killed by mismanagement. There’s some irony in that latter view given Microsoft’s penchant for continuing to pursue products long after they have failed to achieve critical mass, well beyond the requisite three versions.

The debates within the team about the iPad were there. Would it replace or augment a laptop? Was it a substitute for a PC, with users continuing to do what they did on a PC, or would people use it as an alternative to do new things in new ways? Would developers build apps specifically for the larger iPad screen? Would it ultimately be limited to consumption as many predicted, or become a creative and productive tool? Would third-party developers optimize apps for the iPad? Would PC OEMs make a good tablet with the software we would design, or not? This last question was critical.

    Amazingly, these debates continue today even with something like a half billion active iPads. The convergence of today’s Apple Silicon Mac and iPad make the debate either more interesting or exceedingly moot.

    Back in the realm of what the team could control and execute, we began planning Windows 8 in the Fall of 2009. We saw the disappointments at the Consumer Electronics Show followed by the surprise of the iPad announcement. Our plans came together in the Spring of 2010. For the new Windows team, it was a magical planning effort.

    On to 100. A Daring and Bold Vision

    What did you think of the iPad?



• The planning for Windows 8 was moving right along. But something wasn’t right as we wrapped up Windows 7 activities at CES 2010. It was looking more and more like the plans and the way the ecosystem might rally around them would yield a watered-down result—it would be Windows and a bunch of features, or perhaps irreconcilable bloat. The way the ecosystem responded to touch support in Windows 7 concerned me. How would we avoid the risk of a plan that did too much yet not enough? Oh, and Apple scheduled a “Special Event” for January 27, 2010, just weeks after a concerning CES.

    Back to 097. A Plan for a Changing World [Ch. XIV]

In early January 2010, I was walking around the show floor at CES the evening before opening day as I had routinely done over the years. CES 2010 was a mad rush to build a giant city of 2,500 booths only to be torn down in 4 days. This walk-through gave me a good feel for the booths and placement of demonstrations. It was just two months after the launch of Windows 7. Walking around I made a list of the key OEM booths to scope out first thing in the morning. It wasn’t scientific, but visiting a booth had always been an interesting barometer and sanity check for me compared to in-person executive briefings. Later in the week I would systematically walk most of the show and write a detailed report. The next morning, along with the giant crowds, I made my way through a few dozen booths with the latest Designed for Windows 7 PCs mixed in among the onslaught of 3D televisions controlled by waving your hands, which garnered much of the show’s buzz.

    The introduction of touch screens was a major push for Windows 7 and there was genuine excitement among the OEMs to offer touch as an option, though few, okay none, thought it would be a broadly accepted choice. Touch added significantly to the price. With two relatively small suppliers for the hardware, OEMs were not anxious to make a bet across their product line. TReller, the new CMO and CFO for Windows, made a good decision to provide strategic capital to one component maker to ensure Dell would make such a bet.

    There were touch models from most every PC maker, but they were expensive—most were more than $2,500 when the typical laptop was sub-$1,000. For the OEMs this was by design. If a buyer wanted touch there was an opportunity to sell a high-end, high-margin, low-volume device. OEMs had been telling us for months that this was going to be the case, but it was still disappointing.

Taking advantage of Windows 7, and wholeheartedly, was a sea of Windows 7 “slates” all based on the same design from the combination of Intel and Pegatron, an ODM. These slates were essentially netbooks without keyboards. They fit the new Intel definition previously described—MID, or mobile internet device. They were theoretically built as consumption-oriented companions to a PC. They were shown reading online books, listening to music, and watching movies, though not at particularly high resolution or via streaming, given the meager hardware capabilities. All of them had relatively small, low-resolution screens. To further emphasize the Intel perspective, Intel also launched AppUp for Windows XP and Windows 7, a developer program and early content store designed to support rights management and in-app purchase as one might use for books and games.

The buzziest slate was from Lenovo, not a product announcement, but a prototype model kept behind glass at the private Lenovo booth located in a Venetian Hotel restaurant. The “hybrid slate” Lenovo U1 was an 11” notebook that could also be separated from the keyboard and used as both a laptop and a slate. As a laptop, the Windows 7 PC used a low-power Intel chipset, a notch slightly above a netbook. Detached as a slate, the device ran a custom operating system named Lenovo Skylight, based on Linux running on a Qualcomm Snapdragon chipset. The combination weighed almost 4 lbs. The Linux tablet separated from the Windows-based keyboard PC somewhat like the saucer section of the Starship Enterprise separating from the main ship. Lenovo built software to sync a small amount of activity between the two built-in computers, such as bookmarks and some files. Economically, two complete computers would not be the ideal way to go, bringing the cost to $1,000. Strategically for Microsoft this was irritating.

Nvidia, primarily known for its graphics cards used by gamers, was always an interesting booth. Nvidia was really struggling through the recession and would finish 2010 with revenue of $3.3 billion and a loss of $60 million. As it would turn out, it was also a transformative year for the company, and for one of the most legendary founders in all of the PC era, Jensen Huang. To put Nvidia in context, my very first meeting with Intel when I joined the Windows team in 2006 was about graphics, because of the Vista Capable fiasco. Intel was digging in its heels, favoring integrated graphics, and was not at all worried that its capabilities were so far behind what Nvidia and ATI, another graphics card maker, were delivering, capabilities Intel viewed as mostly about games. The rub was that AMD, Intel’s archrival, had just acquired ATI for $5.4 billion, making a huge bet on discrete (non-integrated) graphics, which was what Nvidia focused on. Intel seemed to believe that the whole issue of graphics would go away as OEMs would simply accept inferior graphics from Intel because it was cheaper and easier while Intel improved integrated graphics over time, albeit slowly. It was a classic bundling strategy of “a tie is a win” that would turn out to be fatal for Intel and an enormous opportunity for ATI/AMD, Nvidia, and Apple. Intel could have acquired Nvidia at the time for perhaps $10-12B if that was at all possible.

In their booth Nvidia was showing off how they could add their graphics capabilities to the Intel ATOM processor and dramatically speed it up. Recall, ATOM chips struggled to run even full-screen video at netbook screen resolutions. With Nvidia ION it was possible to run flawless HD video. Nvidia made this all clear in a “Netbook Nutrition Facts” label that they affixed to netbooks running Nvidia graphics. This was a shot across the bow at Intel, but Nvidia had an uphill battle to unseat bundled graphics from Intel. Nevertheless, we were acutely aware of the strong technical merits of Nvidia’s approach, Jensen’s incredible drive, and the needs of Windows customers. This would prove critically important in the next chapter as we worked further on non-Intel processors.

More disappointing was how the OEMs chose to demonstrate touch capabilities in Windows 7. We provided OEMs with a suite of touch-centric applications, such as those demonstrated a year earlier: mapping, games, screen savers, and drawing. We even named it the Windows Touch Pack. The OEMs wanted to differentiate their touch PCs with different software and viewed the Touch Pack we provided as lame. Touch had become wildly popular in such a short time because of the iPhone, and the iPhone was a consumer device used for social interaction, consumption of media, and games.

With that frame of reference, the OEMs wanted to show scenarios that were more like an iPhone. The OEMs were going to do what they did, which was to create unique software to show off their PCs. This was just what many in the industry called crapware or, as Walt Mossberg coined in his column, craplets. To the OEMs this was a value add and important differentiation. Worst of all, there was no interest from independent software makers, who had dedicated few, if any, resources to Win32 product updates, let alone updates specific to touch, which was exclusive to the new Windows 7. We never intended to support Windows 7 touch on earlier versions of Windows—something developers ask of every new Windows API. Backporting was a compatibility technique we had often relied on, but it was a key contributor to Windows PC fragility and flakiness, colloquially referred to as DLL Hell, as previously described.

The overall result: Windows 7 provided no impetus for third-party developers, and we failed to muster meaningful third-party support for any new features, including touch.

    What I saw was a series of new OEM apps taking a common approach. These apps could be called shells in typical Microsoft vernacular, an app for launching other apps—the Start menu, taskbar, and file explorer in Windows constitute the shell. In this case, these new shells were usually full-screen apps that had big buttons across the top to launch different programs: Browsing, Video, Music, Photos, and YouTube. These touch-friendly shells did not do much other than launch a program for each scenario, the browser, or simply a file explorer. For example, if Music was chosen then a music player, created by the OEM or chosen because of a payment for placement, would launch to play music files stored on or downloaded to a PC. The use of large touch buttons was thought to give the PC user interface a consumer feel. The browser used part of Internet Explorer but not the whole thing, just the HTML rendering component; for example, it did not include Favorites or Bookmarks usually considered part of a browser. This software, as well intentioned as it was, would fall in the crapware category. This was expected, but still disappointing.

In booths with mainstream laptops we were offered a glimpse into the changing customer views of what made a good laptop. The Windows 7 launch had been just a short time earlier, so most laptops had incremental updates, primarily the new version of Windows and perhaps a slight bump in specs. Some show attendees picked up the laptops and grimaced at the weight and thickness—these machines were hardly slick. It had been two full years since the introduction of the MacBook Air and the PC industry still did not have a mainstream Windows PC that fit in the famous yellow envelope wielded on stage by Steve Jobs. The prevalence of Wi-Fi in hotels and workplaces had changed the view that a work PC needed all the legacy ports present on Windows computers, and the fear of dongles had faded. Most every Windows PC still shipped with an optical drive, significantly increasing the height of the PC. Show-goers were openly commenting on the size, weight, and lack of portability of Windows PCs compared to the MacBook Air, which was what many were lusting after and, based on the numbers, had already switched to. For the two years since the MacBook Air launch, the PC makers had been preoccupied with netbooks.

Apple broadened its product line with an even smaller and lighter MacBook Air using an 11.6” screen. It also added MacBook Pro models that were more in line with Windows PCs when it came to hardware; for example, they included optical drives and discrete graphics. These laptops were expensive relative to mainstream business PCs running Windows. For example, the Dell XPS 15, which debuted in 2010, was praised by the press, though only relative to other PCs. Compared to MacBooks, the Dell lost out for a variety of reasons including noise, keyboard feel, trackpad reliability, screen specs, and more. In reviewing the new Dell XPS, AnandTech, a highly regarded tech blog, said:

    We've lamented the state of Windows laptops on numerous occasions; the formula is "tried and true", but that doesn't mean we like it…what we're left with is a matter of finding out who if anyone can make something that truly stands out from the crowd.

    Of course, if we're talking about standing out from the crowd, one name almost immediately comes to mind: Apple. Love 'em or hate 'em, Apple has definitely put more time and energy into creating a compelling mobile experience.

If only there had been a Windows PC like the MacBook Air for Windows 7. The closest we had was the low-volume, premium-priced Sony VAIO Z, new for 2010. This model featured a solid-state drive like the MacBook Air but was much heavier and larger than the famous envelope, and it could easily top $3,000 when fully specified.

This knocked the wind out of me. I was happy that there were Windows 7 PCs at the show. But seeing the reaction to them only reinforced my feeling that the ecosystem was not well. I was hardly the first Windows executive to bemoan the lack of good hardware from partners, especially hardware that competed with Apple. Six years earlier, before the MacBook Air but also around CES, my predecessor JimAll sent a polemic, intentionally so, to BillG titled “Losing our way…” in which he said, “I would buy a Mac today if I was not working at Microsoft.” As with many of these candid emails, this one made its way to an exhibit in a trial—it added nothing to the case, but such salacious emails with little by way of legal implication are often used to attempt to gain leverage for settlements or stir emotions in a courtroom.

By the time the show ended, I wrote up my CES 2010 trip report as I had been doing since about 1992 or so. I loved (and still love) writing up these reports. No matter how down I was or how boring I thought the show was, I amped up the excitement as though it was the first CES I had ever attended. I think it is important to do that because otherwise the cynicism that seeps in with age—often seen in the press covering the event—makes it too easy to miss what might be a trend. My report ran 20 pages with photos and covered everything from PCs of all shapes and sizes to 3D television to iPhone accessories (lots of those).

Not lost on anyone at the show was how Apple loomed over it even though the company had zero official presence. The vast majority of accessories located along the edges of the Sands Exhibition Center were cases, docks, cables, and chargers for the iPhone. The camera area was filled with accessories to turn an iPhone into a production camera, such as a Steadicam for an iPhone that I even mocked in my report (oops, that was way off base). The real news, however, was that the gossip of a forthcoming Apple tablet was everywhere. Nearly every news story about the show mentioned the one tablet that wasn’t there.

The week before CES, in a classically Apple move, invites were sent to the press for an Apple Special Event to be held January 27, 2010. Leading up to the event, rumors swirled around how Apple would introduce a new touch computer and eventually converged on a 10” tablet based on iPhone OS. As we still see today, the rumors were wildly off base, and only in the days just before the event did they mostly match the eventual reality.

    After the first day of CES and our contribution to the main keynote by SteveB, that evening, I was in a mood and not a good one. I skipped yoga (at the awesome Vegas HOT! studio) and a previously planned celebratory team dinner that I arranged (!) to write a memo. Maybe memo isn’t the right word. It was another dense thought-piece from me, a 6000-word SteveSi special for sure. I think JulieLar, MikeAng, and ABurrows are still angry with me for missing that dinner.

I knew mainstream PCs were selling well. PCs from Dell, Acer, and HP met the sweet spot of the market, even among a marketplace overdosing on netbooks. People were over the moon for Windows 7. Office 2007 was doing quite well, despite not having a compatibility mode, a complaint that had receded to a non-issue. Windows Server and associated products were going strong even though cloud computing had taken over Silicon Valley. Even Bing was showing signs of life six months or so into its rebranded journey, which had been managed by Satya Nadella (SatyaN) for the past few months. These all made for the bull case that Microsoft was making progress.

    For customers and the tech industry, well, their attention had by and large moved to non-PC devices, Apple versus Google in phones, and the potential new tablet form factor. Microsoft launched the HTC HD2 running Windows Mobile 6.5 at CES. It was a well-received and valiant effort while Windows Phone 7 progress continued even in a decidedly iPhone world.

I remained paranoid, Microsoft paranoid. The dearth of premium PCs to compete with Apple and the lackluster success with touch gave me pause. Even though we had tried to right all the wrongs of introducing new hardware capabilities with new approaches, we had failed. There were no Windows 7 touch apps. There never would be. It was totally unclear if the industry would ever come together to create a MacBook Air equivalent, and even if it did, it was unlikely to have the low cost, battery life, and quality over time of a Mac, even though it too would be running on Intel. The browser platform consumed almost all developer attention, with iPhone and then Android drawing developers from the browser. Apple’s App Store, 18 months old, had 140,000 apps, up from 500 at launch, growing at a rate of 10 to 20 percent per month. To put that in perspective, during the Windows 7 beta I shared what I thought was an astoundingly large number—883,612 unique Windows applications seen across the massive beta test. By mid-2022, the Apple store had more than 3.4 million apps!

A sense of dread and worry came over me when I thought about our converging Windows 8 plans. It wasn’t a panic attack. It was more me wondering if I had failed to give clear enough direction to the team about how big a bet we were willing to make on Windows 8. Was I too subtle, which I was definitely known to be, and did I give too much room for the plan to turn into an “and” plan? An “and” plan means we would do everything we would have normally done, “and” also take into account all the new scenarios. Such plans are easy to sign up for because they don’t involve tradeoffs. At the same time, they force no decisions and lack the constraints that yield a breakthrough design.

An “and” plan is what I saw across the CES show floor and even from Intel. It is a plan where the PC is a Windows PC plus the new stuff, as though the new stuff is just another app one could take or leave. The extra launcher shells, content stores, and touch interfaces were bolted on the side of Windows and did not substantially alter the value proposition that was under so much pressure. Relatedly, an “and” plan would also presume a traditional laptop or desktop PC as the primary tool, which itself would prove as limiting down the road as it was right then.

We needed a plan with a point of view about how computing would evolve. That point of view needed to be built upon the assumption that Win32 was not the future, or even the present. We needed to realize that an app and content store were of paramount importance. We needed to account for shifting customer needs in their computing device. We needed a plan that assumed the web was the defining force that had changed computing and that smartphones were themselves changing the web. We needed a plan that substantially changed the futzing, viruses, malware, and overall fragility of the PC so it acted more like a consumer electronics device. Most of all, we needed a plan to engage developers in a new way so they would become interested in Microsoft’s solution to this opportunity, and not the solutions from Apple or Google.

    We had the ingredients of a plan that might be adequate to quell the forces of disruption. Would executing such a plan be possible within the context of Microsoft? What does a plan look like when it also involves Windows itself?

The Windows 8 framing memo I wrote and the planning memo and process JulieLar had started contained the core of a point of view. We had set out to investigate “alternative hardware platforms,” potentially “distributing applications in a store,” designing “a modern touch experience,” and much more. As interesting and innovative and outside the box as these were, they all suffered the same risk. The problem was that the team viewed them as additions to Windows, as add-ons to the next release, as nice to have. We had—and this is where the discomfort came from—created a classic strategy of the incumbent when faced with disruption. Our strategy treated the forces of disruption—in this case mobile hardware platforms, modern user interface, and the app ecosystem—as incremental innovations in addition to what we had. We were thinking that we would be competitive if we had Windows and other features.

    This is not how disruption works.

    Disruption, even as Christensen had outlined a decade earlier, is when the new products are difficult for the incumbent to comprehend and are a combination of inferior, less expensive, less featured, and less capable, but simultaneously viewed by customers as superior replacements.

    As a team, we had faced this before, and I had spent the better part of three releases of Office fending off those claiming Office was being disrupted by inferior products built on Java, components, or browsers. In all those instances, I stuck to what I believed to be the case and defended, effectively, the status quo until we took the dual risks of expanding Office to rely on servers and services and then redefining the user experience for the product. Patience paid off then. The new technology threats were immature at best and a distraction at worst.

    What gave me the confidence to believe, as they say, this time was different? Was it my relative inexperience in Windows of just one release? Naivety? Arrogance? Was it envy of Apple or Steve Jobs?

    It was none of those things.

    It was simply the combination of the shortcomings I was seeing on the CES show floor and the fact that we’d all been using iPhones. We were not talking about a theoretical disruption where someday Java Office could have all the features people wanted in Office or that someday all the performance and UI issues would be squeezed out of the browser. The disruption had already happened, but the awareness of that was not shared equally by everyone involved.

    To use a phrase attributed to William Gibson, “The future has arrived—it’s just not evenly distributed yet.”

    Windows had 100% share of the Windows PC market and 92% share of the PC market. iPhone and Android (phones, and soon other form factors) were making a compelling case that the Windows position, as a percentage of computing, was declining. This decline in computing share would only accelerate.

    In technology, if you are not growing then you are shrinking.

Holed up in my hotel room while the rest of the team had dinner, I banged away at my netbook keyboard, writing a screed about the future of the web, apps, and how consumers would interact with content—and, importantly, ways in which the PC was deficient. I wrote 6,000-plus words, making the case that the iPhone was so good the internet was going to wrap itself around the phone rather than the phone becoming a browser for the existing internet—the web would tailor itself to the phone via apps. The experience of the web was not going to be like it previously had been—the iPhone was not going to deal with the desktop web and attempt to squeeze it onto the phone. The browser itself would become a form of legacy, “the 3270 terminal” of the internet era. The 3270 is the IBM model number for a mainframe terminal, replaced by PCs and applications.

The office workers I gave their first PCs to in 1985 did not realize that the business forms they filled out with their Selectric typewriters would change to work on a PC, rather than the PC getting good at filling out carbon-paper forms. A key part of a technology paradigm shift is that the shift is so significant that work changes to fit within the new tools and paradigm. There’s only a short time when people clamor for the new technology to feel and behave like the old.

I sent the memo to a few people that night. It received a bit of pushback because it felt like what we were already doing, only restated. I kept thinking this was how disruption really happened—teams go into denial, say “We got this,” and believe adding a bit more will make the problem (or me) go away. IBM added a 3270 terminal emulator to the original PC for this reason, but that didn’t make people want a PC more or even use it; rather, they complained that the terminal seemed more difficult to use than a standard 3270 terminal. The PC I most frequently installed in the summer of 1985 was the IBM PC XT/3270, a hybrid mess if there ever was one.

    It was clear we were falling into the trap of thinking incrementally when the world around us was not only changing but converging. The Windows team could no longer think of modern mobile platforms as distinct from the PC. People’s technology needs were being met by phones and were moving away from the PC.

    This was not a call to turn the Windows PC into a phone. Rather this was a chance to up-level the PC, to embrace the web, embrace the app store model, embrace the shifting hardware platform and associated operating system changes, and embrace internet technologies for application development.

Most of all it was about “apps,” except I couldn’t quite find the right word to use. That Apple had been talking about “apps” for the past couple of years really bugged me. When I was in Office, we were the Applications (or Apps) group at Microsoft, and an app was just the techie word for another, more broadly used techie word, program. Word and Excel were apps, and I was trying to define a new kind of app. App was such a weird word that even our own communications team didn’t think we should use it. Lacking a better word that night in the hotel, I called the memo “Stitching the Tailored Web,” which was a horrible name and a worse metaphor that took a lot of time for me to explain to people in person.

The key ingredients of this new world as I described it were a computing device, a content store, and the tailored web itself. The computing device encompassed an interaction model for the device and for the new tailored apps. The content store was where users obtained apps created and distributed by developers, along with consumable content such as movies, books, and music. The tailored web included the APIs used by developers and the OS platform, the new tools required by developers, as well as the core technologies of Internet Explorer used to create these applications and to browse the traditional web. All three came together to create what I described as the more “consumable internet.” I even had a fancy diagram I made with PowerPoint to illustrate the point.

My main point was that mobile phones were so huge and such a force that even the internet would change because of phones, the iPhone in particular. The notion of going to websites would come to be viewed as quaint or legacy compared to apps. Today, there are plenty of people who do not accept that this is where we landed. There are those that use desktop PCs and live in a desktop browser and defiantly believe that to be the only way to get real work done. They use the highest-end PC tools such as Visual Studio, Excel, AutoCAD, and Adobe Premiere and did not (and still do not) see phones or mobile software replacing those any time soon. Yet there is no disputing the vastness of mobile. Depending on the source (and country), about three-fourths of all web traffic is mobile-based, and about 90% of time spent on a mobile device is in an app. With this dramatic change in how the web was used, it is fair to say that the web became a component of mobile, not the other way around. The browser-based Web 2.0 juggernaut would be subsumed by the smartphone.

    The most dramatic expression of this would not happen for another 2 years. In the summer of 2012, Facebook, the posterchild of Web 2.0, dramatically pivoted from desktop browsing as the primary focus for the Facebook experience to the Facebook mobile app. That legendary pivot is often touted as both remarkable and responsible for the success and reach the company subsequently achieved.

    The Tailored Web memo was much better delivered as a passionate call to the team. I waited until just after Apple’s scheduled event and later that same day held a meeting with our senior managers (the 100 or so group managers across all of Windows, each the leader of dev, test, pm for a feature team.) This was my favorite meeting and one I held at least quarterly going back to the late 1990s in Office.

    Working with Julie, this became our own version of a pivot. It was not as much a pivot as simply a focusing effort. We still had about three months until the vision needed to be complete. This was by no means a major change. Rather, it was relaxing the constraint that Windows 8 had to do two things, an “and” project, so that it could focus on making sure we were set up for the future. The call to action was to prioritize the new over the legacy; interoperability between the legacy and the new was not a priority. We wanted to break away from what was holding us back, a legacy neither of our primary competitors dealt with.

    To the team, I wanted to make the case about apps by showing a series of screenshots of web sites versus apps on various platforms and rhetorically asking “Which would you rather use?” Facebook on iPhone or a desktop? Crowded, ad-filled Bing on a desktop or the Bing search app on Windows Mobile? Outlook or the Mail app on iPhone? Reading a book in a browser or on a Kindle? A traffic map on a desktop browser or in the Microsoft Research SmartFlow traffic app?

    The answer was obvious for all of us and remains obvious even today. The call to action was a doubling down on the developing planning themes “Defining a Modern PC Experience” and “Blending the best of the Web and Rich Client” with an emphasis on consumption and the internet. Specifically, it was “Windows, in all form factors, needs to be the best platform and experience to consume internet content while enabling ISVs and IHVs to build the software and hardware to do that.”

    It was early in the potential for this disruption, and it was easy to point to thousands of tasks, features, scenarios, partners, business models, ecosystems, and more that required a PC as we knew it. Every situation is different, but it is that sort of defensiveness that prevented companies like Kodak, Blackberry, Blockbuster, and more from transitioning from one era to another.

    It is easy to cycle from anxiety to calm in times like this. It was easy for me to look around and find affirmation of the “and” path we were on. Every stakeholder in the PC ecosystem would be far happier with incremental improvements to the status quo rather than embarking on huge changes with little known upside and significant immediate cost and potential downside. It would have been easy to fall back on the financial success we had and leave dealing with a technology shift for the future when technology specifics were clearer. It would, however, be too late as we now know. We would also lose the opportunity to lead a technology change that had just begun.

    Worrying seemed abstract. Developing a clear point of view and executing on that seemed far more concrete and actionable.

    The announcement from Apple and their tablet turned out to be far more than our poorly placed sources had led us to believe. What was it like in the halls of the Windows team on that day, January 27, 2010? It was not magical for us.

    On to 099. The Magical iPad



  • Welcome to Chapter XIV. This is the first of two chapters and about a dozen remaining posts that cover the context, development, and release of Windows 8. Many reading this will bring their own vivid recollections and perspectives to this “memorable” product cycle. As with the previous 13 chapters and 96 posts about 9 major multi-year projects, my goal remains to share the story as I experienced it. I suspect with this product there will be even more debate in comments and on Twitter about the experiences with Windows 8. I look forward to that. This chapter is the work and context leading up to the plan. Even the planning process was exciting.

    Back to 096. Ultraseven (Launching Windows 7)

    In the summer of 2012, I was sitting across from BillG at the tiny table in the anteroom of his private office on the water in Kirkland. The sun was beaming into my eyes. In front of me was one of the first boxes of Microsoft Surface RT, the first end-to-end personal computer, general-purpose operating system, and set of applications and services designed, engineered, and built by Microsoft. In that box sat the culmination of work that had begun in 2009—three years of sweat and angst. After opening it and demonstrating, I looked at him and said with the deepest sincerity that this was the greatest effort and most amazing accomplishment Microsoft had ever pulled off.

    Later that same week, I had a chance to visit with Microsoft’s co-founder, Paul Allen (PaulA), at his offices at the Vulcan Technology headquarters across from what was then Safeco Field. I showed him Surface RT. I previously showed him Windows 8 running on a desktop using an external touch monitor. At that 2011 meeting he gave me a copy of his book Idea Man: A Memoir by the Cofounder of Microsoft and signed it. Paul was always the more hardware-savvy co-founder, having championed the first mouse and the Z-80 SoftCard, and had been pushing me throughout the Windows 8 release on how difficult it would be to get performance using an ARM chipset and on the challenges of hitting a low-cost price point. Years earlier, Vulcan built a remarkably fun PC called FlipStart, which was a full PC the size of a paperback novel. Surface RT, with its estimated price of $499 ($599 with a keyboard cover) and a fast and fluid experience, meant the meeting ended on a high note. I cherished those meetings with Paul. I also shared with Paul, proudly, my view of just how much we should all value the amazing work of the team.

    What I showed them both was the biggest of all bets. While not “stick a fork in it” done, by mid-2012 Microsoft seemed to have missed the mobile revolution that it was among the first to enter 15 years ago. In many ways Surface RT set out to make a new kind of bet for Microsoft—a fresh look at the assumptions that by all accounts were directly responsible for the success of the company. Rethinking each of those pillars—compatibility, partnerships, first-party hardware, client-computing, Windows user-interface, and even Intel—would make this bet far bigger and more uncomfortable than even betting the company on the graphical interface in the early 1980s. Why? Because now Microsoft had everything to lose, even though it also had much to gain.

    With Windows 7, we knew we had a traditional release of Windows that could easily thrive through the full 10-year support lifecycle as we had seen with Windows XP. Windows 7 would offer a way to sustain the platform as it continued to decline in relevance to developers and consumers, while extracting value from business customers with little incentive to change.

    Microsoft needed a new platform and a new business model for PC makers, developers, and consumers. The only rub? Any solution we might propose wasn’t something we could A/B test or release in pieces experimentally. Windows was the standard and wildly successful. It wasn’t something to experiment on.

    While the company was 100 percent (or more) focused on Windows 7, we had started (drumroll for the codename please) Windows 8 planning five months earlier. We had the next steps ready: first a framing memo from me, then a planning memo from JulieLar, ready to go as soon as we all caught our collective breath after the release—no real time to celebrate more, and certainly no downtime.

    Technology disruption is often thought of at a high level along a single dimension, but it is far more complex. Consider Kodak’s encounter with digital photography, Blockbuster’s battle with DVD-by-mail, or the news business’s struggle with the web. Great memes, sure, but one layer down each is a story of a company facing challenges in every attribute of their business, and that is what is interesting and so challenging.

    Digital photos were more convenient for consumers, that was true. But also, the whole of Kodak was based on a virtuous cycle of innovation developed by chemical and mechanical engineers, products sold through a tightly controlled channel, and an experience relying on a 100-year-old tradition of memorializing births, graduations, weddings, and more. At each step of the technology change to digital images, another major pillar of Kodak was transformed, sliced, and diced in ways Kodak could not respond to. The magical machinery of Kodak was stuck because not one part of its system was strong enough to provide a foundation. Kodak didn’t need to also enter the digital market. The only market that would come to exist was digital. The only thing that made it even more difficult was that it would take a decade or more to materialize as a problem, and that during that time, many said, “Don’t worry. Kodak has time.”

    While figuring out what to do next for Windows, we saw Blackberry facing a “Kodak moment.” Blackberry was not just a smartphone, but a stack of innovations around radios, a software operating system optimized for a network designed for small amounts of data consuming little power, a business model tuned to ceding control to carriers, and a keyboard loved by so many. In 2009, Blackberry still commanded almost 45 percent of the smartphone market, even though the iPhone had been out since 2007. That led many to find a false comfort in the near term and to conclude Blackberry would continue to dominate. Apple’s iPhone delivered a product that touched every pillar of Blackberry, not only Blackberry the product but Blackberry the company. Sure, Apple also had radio engineers, but they also had computer scientists. Sure, Apple also worked with AT&T, but Apple was in control of the device. And Apple was, at its heart, an operating system company. Blackberry had some similar elements up and down the entire product stack, but it wasn’t competitive. At its heart, Blackberry was a radio company. Blackberry seemed to have momentum, but market share was declining by almost 1 percentage point per month.

    The smartphone, iPhone and Android in particular, was disrupting Windows. Not everyone, though, thought that to be the case. Some said that phones did not support “real work” or “quality games.” The biggest risk to a company facing disruption is to attempt to dissect disruptive forces and manage each one—like adding touch or apps to a Blackberry with a keyboard. The different assumptions and approaches new companies take only strengthen over time, even if eventually they take on characteristics of what they supplant. The iPhone might never be as good as the PC at running popular PC games, but that also probably won’t matter.

    I’d lived through graphical operating systems winning over character mode, PC servers taking over workloads once thought only mainframes could handle, and now I found myself facing the reality that browsers could do the work of Windows, relegating Windows to a place to launch a browser and not even Microsoft’s.

    Every aspect of the Windows business faced structural challenges brought on by smartphones. Compounding the challenge, Apple was competing from above with luxurious and premium products vertically integrated from hardware to software. Google, with Android and Chrome, was competing from below with free software to a new generation of device makers. The PC and the struggling Windows phone were caught in the middle, powerless to muster premium PCs and unable to compete with a free open-source operating system.

    The browser had already shifted all new enterprise software development away from the hard-fought victory of client-server computing. Anyone who previously thought of building a new “rich client” application with Visual Basic or some other tool was now years into the transition to “thin client” browser software. The Windows operating system was in no way competitive with smartphone operating systems. The way to develop and sell apps had been reinvented by Apple. The Win32 platform was a legacy platform by 2009; the only debate was how long that had been the case. A legacy platform does not mean zero activity, but it does mean declining and second- or third-priority efforts.

    Android called into question fundamental assumptions of the partnership between the Windows operating system and hardware builders. OEMs supporting Android not only received the source code but were free to modify it and customize it to suit their business needs. That was exactly what Microsoft fought so hard against in the DOJ trial. The touch-based human interaction model developed by Apple and the large and elegant touchpad on Mac were far more approachable and usable than what was increasingly a clunky mouse and keyboard or poor trackpad on Windows. The expectations for a computer were reset—a computer should always be on, always connected, never break or even reboot, free from viruses and malware, have access to one-click apps, while easy to carry around without a second thought. That supercomputer in your pocket is connected to built-in operating system services providing storage, backup, privacy, security, and more. Like Kodak, many said to slow down, that change would not happen so quickly. Many said we could “add this” or “change that.” But it was not that simple, nor was it going to be that easy. The market was moving quickly.

    The march to the next billion shifted, in the blink of an eye, to smartphones and soon non-Windows tablets, entirely skipping the PC. The entrenched PC customers were as excited as ever to upgrade to Windows 7 by buying a new PC or upgrading their existing PC. Everything new in Windows 7 was viewed through the lens of the past. How did we do that before, and did Windows 7 make it easier? Those skipping the PC didn’t concern themselves with improvements in a new version of Windows because they didn’t know about the old version. PC enthusiasts were still asking for features to handle managing files and keeping multiple PCs in sync while phone or browser users were seamlessly connected to a cloud and gave no thought at all to where files were stored. Buying a second iPhone or replacing a broken one proved astonishingly simple, because of the integration with services.

    Microsoft’s answer to iPhone and Android 1.0 was still being developed. Windows Phone 7, originally codenamed Photon, would be released a year after Windows 7. The mobile team was heads down trying to get that release done, which had been in the works since Windows Mobile 6.0 shipped in early 2007. By the time Windows Phone 7 shipped, the Windows 8 team would have completed the first of three milestones. The next full chapter will detail the challenges this lack of synchronization created. Windows Phone 7 was on a mission to get to market and compete, and in doing so was already pre-booking its next release; Windows 8 was attempting to plan a major release that in theory would lend support to the mission of modernizing the codebase, which Windows Phone desperately needed as it competed with the modern Linux kernel and OS X.

    Meanwhile, there were few new customers being brought to the cash-producing business of the Windows PC. There we were with the metaphorical Sword of Damocles hanging above us. The best days of the PC were behind us. It wasn’t just obvious technically—the value propositions of mobile devices were in no way compatible with the Win32 and PC architectures. The only question was when things would truly take a turn for the worse.

    The risks were infinitely high. We could do something radically different, potentially choosing wrong and heading right into failure, even accelerating what was inevitable. Or we could slow-walk, releasing incremental features for as long as possible while hoping to find some future solution to our quagmire. Doing both is a classic upper-management answer that with near certainty results in a combination of a bloated and confused near-term product that keeps trying to pull features from the new, and a brand-new product that develops a major case of second-system syndrome. The latter is a classic defined in the standard developer book The Mythical Man-Month, and what we experienced with numerous Microsoft projects from Cairo to Longhorn, when the next generation attempts to solve all known problems at once.

    It is why so many default to preserving the cash cow as the most logical strategy, maintaining the success already achieved. Usually this comes with raising prices along the way for existing customers, who are by and large captive. In the short term, things continue to look great and a crisis appears to be averted, while in reality the crisis is simply left to a future team. In fact, in the short term our most ardent fans would continue to carry on as though everything is normal. They are part of the same bubble as we are.

    Either way, the world outside of the Windows PC shifted to mobile. We were at a fork in the road and had to pick a direction.

    The thing about disruption when it is happening is that you alternate between over-confidence and paranoia. Financially, the Windows PC was secure. Emotionally, the Windows PC was fragile. From a product perspective, not only was there little left to do on the current path but doing anything would annoy both the disinterested customers and ardent fans more than it would encourage them or any new buyers. Over-confidence leads one to incremental plans, assuming the existing business is so strong as to not worry. Paranoia makes it difficult to identify the precise nature of disruption and to calibrate a response.

    For all the insights that The Innovator’s Dilemma captured, the one thing it did not seem to predict was that even after disruptions companies can continue to make vast sums of money for years to come from those disrupted products. Technologists tend to think disruption causes products to sort of go away, but as any private equity investor will tell you, that is not the case. That’s why a misplaced confidence in incrementalism almost always wins out over a bold, risky bet, at least for a while.

    We were determined to find a way to do better than to slowly lose relevancy as we’d seen happen to IBM.

    Planning Windows 8

    We kicked off a planning process just as had been done for all the other releases described in this work—from Office 2000 all through Windows 7. By now this team had fifteen years of product planning and results that had transformed a set of bundled applications into a suite of interconnected products, then into a platform of products, servers, and services, and then reinvented the user experience, setting these tools up for at least another decade. We then aimed this same process at the challenges of the Windows business, and by all accounts set that product up for another decade.

    About three months before Windows 7 released to manufacturing, I sent out the first step in planning Windows 8 (the obvious name). “Building on Windows 7” outlined the state of the business and product, suggested where we would invest for the next project, detailed competitive threats, and defined what success would look like.

    As with all the other framing memos I wrote, this one began upbeat. How could it not? The release of Windows 7 was a super positive experience for the team, the company, and the broad ecosystem. The memo celebrated all we had accomplished as a team, including, with Jon DeVaan’s leadership, significantly improving our engineering capabilities and productivity. Promise and deliver.

    Then the reality. The recession had slowed the growth of PC sales and analysts were taking forecasts down. The damage netbooks had done to pricing leverage was significant, resurrecting the Linux desktop and lengthening the life of Windows XP in the market. Netbooks also masked the secular weakness in PCs, which was also showing up as a lengthening of the PC refresh cycle in large companies. PCs were being tasked to last longer without loading a new version of Windows or Office. Emerging markets and software piracy were proving highly resistant to most every effort. The sabbatical I took living in China in 2004 taught me firsthand just how impossible it was going to be to thwart piracy of desktop Windows and Office, our primary sources of revenue. Windows Live was, as we say, making progress, but Google was making much more progress much faster, and our marquee products, Hotmail and Messenger, were losing share and fast. Releasing SkyDrive (now OneDrive) with excellent integration of the new Office Web Applications (Word, Excel, PowerPoint running in the browser with compatibility with desktop Office) was a significant bright spot, but it came with very high operational costs and immense pushback from Office, which feared cannibalization. There were even changes in how PCs were being sold that were proving highly problematic. OEMs were increasingly relying on nagware or trial products, bloating the PC user interface, and putting a ton of pressure on the overall experience of Windows. The retail channel, struggling from the recession, was anxious because they had not yet mastered selling smartphones, the hot category, which increasingly looked like a carrier-specific play. This in turn caused retailers to become part of the equation of pre-loaded software, thus further eroding the experience.

    Competitively, all I asked us to focus on was Apple and Google. Most would see Apple as a competitor through OS X, but as discussed OS X was now powering iPhone. The framing memo was not a plan nor was it even a plan for a plan; rather it served to bound the release and help encourage the team to focus on specific technologies, competitors, and challenges. There was much for the team to collectively learn. At the time, I summarized Apple competition as follows:

    Apple. Apple OS X is a very strong product coupled with amazing hardware. The transition Apple has made from OS 9 to a modern OS architecture on Intel hardware is on par with the transition we made to both the NT code base and to 64-bits. From an OS technology perspective we see the strength of OS X among universities and administrators who find the Mac (Mach-based) kernel and associated shell, tools and techniques “comforting”. From a user experience perspective, we cannot be defensive about the reality that Mac hardware + OS X + iLife continues to be the standard by which a PC + Windows 7 + Windows Live will be judged in terms of technology, and then [sic, how] the purchase experience, post-sales experience, and ecosystem have grown to be considerable strengths. While we have only some details, the hygiene work being done on Snow Leopard is likely to generate significant positive views as the OS becomes “leaner and more streamlined” and likely claims about being more modern with respect to graphics, 64-bits, and user-interface. As we describe below, the sharing of code and architecture, particularly for important strategic elements of the OS, between the iPhone OS and Leopard is technically interesting and certainly responsible for some elements of the platform success. Apple is not without blemishes, but in planning Windows 8 we must focus on their strengths and assume they will continue to execute well on those. Being deliberate and informed about competing with Snow Leopard and, relative to iPhone, making sure we build on the assets of Windows for our Windows Phones should be strongly considered.

    Rather than describe Android directly, I chose to consider Android as another variant of Linux. At the time, the only Android phone was the convertible Blackberry-like device, the HTC Dream. Android 2.0 with full support for multitouch would not release until Windows 7 launched to market about three months later. Note the caution about Google control of Android expressed even then—it was obvious this would be a tension point down the road. In this context the competition with Linux also included desktop Linux:

    Linux. Like so many competitors we have faced as a technology company, just as we thought “this one won’t reach critical mass” we saw two events provide a breath of life into “Linux on the desktop”. First, the rise of Ubuntu, both the technology/packaging and the “movement”, has created a rallying cry for OEMs and for the Linux community. Second, the low-cost PCs that initially came with Linux (due to the footprint “requirements”) created a new incentive for OEMs/ODMs to find a way to make it work. While we were effective in establishing a value proposition for Windows XP on these PCs, the seed that was planted will continue to be revisited by PC makers and designers. They appreciate the potential for business model innovation, componentization, tailoring, and also the opportunities to differentiate using the open source aspects of the OS. It is worth noting the reality that Linux on the server continues to dominate in many important workloads and the server plans are going to inform the planning of the base OS work such that we are extremely focused on defining specific scenarios where we make significant competitive progress with Windows 8 against Linux on the server. Within the context of Linux it is especially important to call out Google Android which will likely be funded by Google for some time and represents an OS choice for mobile phones and phone architectures encroaching on the PC “from below.” Android can provide OEMs with the opportunities similar to Ubuntu, however Google is walking a careful line in providing Android where they can possibly lower costs for OEMs, or even subsidize the bill of materials for a device, counterbalanced by OEM wariness that Google will take too much control of additional revenue streams.

    Perhaps most important were the two “non-traditional” competitors: “Browsers” and “Phones/Alternative Architectures.” The rise of Google Chrome was proving highly problematic, not just because of the loss of share, but because of Google’s determination to add anything and everything from a PC to a cross-platform browser runtime, which is still going on to this day.

    The view of the phone is much more interesting considering how the world evolved. Speculating about Android would prove accurate in just a few short months. I was deeply concerned about the PC ecosystem and the potentially rapid convergence of PC and phone hardware in the direction of phones, as well as Apple’s unified OS strategy. This was in stark contrast to where Microsoft began its mobile journey a dozen silicon-years earlier and where it was today.

    Phones and alternate architectures. The iPhone is referred to by many as a new form of portable computer—“it has a real OS” many have said in reviews. The Google G1 phone running Android is likely to be made available in more PC or “handheld” form factors beyond the single-handed in-your-pocket screen. The desire for the ultimate device that has the power and capabilities of a PC along with the convenience of always on wireless connectivity is beyond alluring. It is what we all want as consumers. To deliver on such a vision many might say that Windows can never power such devices—perhaps that is a statement about “business model” or a statement about technology. Sometimes it is just competitors declaring “[Big] Windows will never be on a phone”. The reality is there was a time in the history of Windows phones (CE, PocketPC, etc.) where the synergy between “little” Windows and “big” Windows was technically robust in reasonable ways. For a variety of reasons we diverged. Today we have hardware designers and phone company customers facing a choice between the OS that supports phone networks, voice, and other “phone” scenarios super well, but does not have the rich ecosystem support of Windows 7 and runs on a small set of hardware chassis, and Windows 7 running on a mainstream hardware platform with broad ecosystem support and openness. These platforms differ in bill of material costs, power consumption, and so on. Much like the “browser is all I need” statement there is a significant amount of extrapolation that both excludes Windows and assumes the Windows platform cannot compete. Our chip partners for Windows are working hard on bringing the x86 architecture “down” and we need to be there with Windows software. And at the same time we will strongly consider how to run Windows on alternate hardware platforms and learn what that would entail—we will work to bring “big” Windows to mobile chipsets, but we have significant groundwork to do before we know the practicality of such an investment. Our Windows 8 planning will most certainly take into account the role of sharing new code across Windows and Windows Phones, starting with the latest Internet Explorer as we are already working on.

    There were several technology bets in the framing memo, putting stakes in the ground for how we would think about significant efforts on the deeper technical challenges of the release, including a big effort to evolve the PC. While I don’t want to fast-forward, these abstract definitions are what would lead to the work on ARM processors (see emphasis above), a new application model, and ultimately first-party hardware.

    The technology bets on evolving the PC included:

    * Mobile Devices

    * Converged OS between Windows PCs and phones

    * Mainstream PCs and the market turn to laptops

    * High-end PC form factors such as meeting room PCs

    * Shared computing PCs such as we were seeing with remote desktop and cloud computing

    Taken together with the work on the cloud, the memo defined the “Modern Windows Experience” as including the following (the original even included a fancy Office Art depiction). The use of “modern” would become a touchstone of the release and of how we described Windows 8:

    * Base operating system: the OS kernel, device management, connectivity, and storage

    * Graphics, Presentation, and Interaction: the visual aspects of Windows including the APIs used by applications and the end-user experience

    * Browsing: the technologies required for browsing the web, but also how those will be reused across the platform to build rich applications

    * Windows “Form Factor” Experience: tuning the experience for different types and sizes of computers from handheld through laptops and desktops to servers and alternate form factors such as tablets and wall computers

    * Windows Live: the broad set of cloud-based services that complete the Windows experience including identity, communications and messaging, storage, and sharing

    As with past releases the framing memo went out to the whole Windows team. The bulk of the team was super focused on shipping Windows 7 and mostly appreciated the informational aspect of the memo. The real work for program management began with JulieLar’s “Windows 8 Planning Memo,” which she sent to the entire team just after the Windows 7 RTM for most worldwide markets. The purpose of the framing memo was to establish the “bounding box” for Windows 8 and bring together the best of top-down, bottom-up, and middle-out planning. The framing memo was the top-down step. The planning memo developed potential themes for the release and explored those.

    Julie outlined the following planning themes which would then be the structure used to facilitate brainstorming, prototyping, and then ultimately the product vision:

    Blending the Best of the Web and the Rich Client. Recognizing the declining role of Win32 development and the increasing dominance of web development and associated tooling, this area asked us to develop solutions for building an engaging and useful platform for developers. What kinds of apps do they want and how would they be distributed? How could the capabilities of Windows enhance what we see as limited web applications?

    Defining a Modern PC Experience. Julie wrote, “The basic elements of today’s Windows user experience—the Desktop, Taskbar, Start menu, and Explorer—were introduced in Windows 95, and their success has made Windows the world’s most familiar computing environment. But today’s modern world is in many ways different from the mid-1990s world in which Windows 95 was designed.” This area asked nothing less than to bring the Windows user experience up to a level that would support existing scenarios and provide a better solution for the next billion customers who see the smartphone as their first and potentially primary computing experience.

    Extending the Reach of Windows. The Windows model of licensing and OEMs had served us extraordinarily well, but with the rise of smartphones there were new revenue opportunities and sales models for Windows that could support reaching the next billion customers. Key among these was the role of mobile operators (“telcos”) in the sales process.

    Connecting to Windows From Anywhere. Typically, the PC of 2010 operated in a world where everything was on the PC except what was viewed through a web browser. PCs could be greatly improved by making use of cloud services available with Windows Live to make all your files, settings, communication, and collaboration easy from any PC or device. What role could the cloud play? Synchronization across devices was something the company had tried many times before, though with limited success. How could it work this time?

    Helping IT to Deliver Work Anywhere Infrastructure. Most of Microsoft’s revenue and much of the upside opportunity came from doing a fantastic job for enterprise IT. This theme asked questions about deployment, security, encryption, and even the mundane such as how to easily replace a lost, stolen, or broken PC.

    Showcasing Quality the First 30 Days and Over Time. This area asked what had historically been the most intractable of questions about Windows. “The burden of ensuring that a new PC is running as well as it should is placed on the customer who purchased it. As a result, the first days of usage, rather than being a period of exploration and fun, often proved to be labor intensive and exasperating.” Julie added, “Making matters worse, Windows itself is not running at its best during the first days of ownership. …Windows performs all of its initial self-tuning and post-out-of-box-experience tasks during this critical time.” When compared to what customers were starting to see with smartphones, Windows looked downright archaic.

    Synthesizing these together into the vision was the next step. While most of the planning team were brainstorming and prototyping, we had some significant engineering work going on, such as working out the feasibility of Windows on ARM, the success of which would prove crucial to our ability to achieve a breakthrough Windows 8.

    A bigger worry: did we set out to do too much? We knew that the team would naturally gravitate to ideas that made the PC better in incremental ways. We also knew some people would push hard on extremely bold ideas. The risk for any large organization is obvious…trying to do too much. The less obvious risk…ending up with a plan that is too much of the old and a little bit, but not enough, of the new. Inertia is one of the most powerful forces in a large company.

    I was worried. We only wanted to spend a few months “just” planning, still an enormous undertaking. Windows 7 shipped in August, with this planning memo and associated work taking most of the Fall. It was looking like we would start the project in the Spring of 2010, which meant an aggressive RTM of Windows 8 in the summer of 2012. Still, that would be exactly three years between releases, something never accomplished before. Promise and deliver.

    On to 098. A Sea of Worry at the Consumer Electronics Show



  • In the era of “boxed” software, release to manufacturing was a super special moment. The software is done, and the bits permanently pressed onto a DVD disc. That disc, the golden master, is then shipped off physically to duplicators around the world and then combined with another artifact of the era, a box, or in the case of Windows 7 a plastic anti-theft DVD contraption. While Windows 95, with the excitement of computing and the newness of the internet, set a high-water mark for launch events, the completion and launch of Windows 7 was still a major worldwide business event. The industry was looking for optimism as we emerged from the Global Financial Crisis and the ensuing slump in PC sales. Windows 7 was just the ticket, and the launch would prove to be part of a massive uptick in PC sales or, as some hoped, a return to ongoing up-and-to-the-right curves. But could that really be the case?

    Back to 095. Welcome to Windows 7, Everyone

    The months after the PDC were extremely intense. We had set out to promise and deliver, but the success of the PDC had managed to inflate expectations. These were not false expectations—the use of the product was widespread and broadly satisfactory. That success is what raised expectations. PC makers, Wall Street, OEMs, and enterprise customers knew the product would deliver and were more than just impatient.

    We made a significant number of changes from M3 to beta. With our improved engineering system, changes were made in a controlled though collaborative manner. Each change was discussed by many people and then the code change reviewed—no holes punched in the wall. With each passing day it was more difficult to make changes while we aimed for stability of the beta. The most important thing about shipping a beta is not that it is perfect but that it ships in a known state. If something isn’t right that’s okay, as long as it’s known. In the case of Windows 7, we knew work and bugs remained but were highly confident that millions of people would try out the beta and have a great experience.

    That methodical crawl to beta went on for weeks, each day making fewer changes and calmly making it to sign-off. Then it was time to ship the beta.

    On January 8, 2009, at the Consumer Electronics Show in Las Vegas, SteveB announced the availability of Windows 7 Beta. The venue and the announcement from the CEO made this a significant worldwide event. It was covered on CNN, BBC, and more. That was exciting and even felt a bit like old times for a moment.

    I watched from back in the green room because we were getting ready to turn on the web site for download and had no idea what to expect. While the internet was old news, downloading a gigabyte DVD image was hardly routine, especially from home and not something the internet was yet equipped to handle reliably.

    To have some sense of control, we set a limit of 2.5 million downloads. Back then, before everyone had gigabit internet at home, a massive gigabyte download was something that would stress out the internet. As the keynote was going on we watched downloads begin. They quickly reached our limit while the keynote continued. A few calls to Redmond and we removed the throttle and began to rewrite our press releases with ever-increasing numbers. We extended the beta downloads through the end of January and had many more millions of installs than downloads as the download made it to all sorts of alternate and backup sites. We also learned a lesson in distributed computing that day.

    For the beta we issued unique product registration keys which became the scarce resource. We soon removed the limits on activating those keys as well. While the download site was structured to choose 32- or 64-bit along with locale to then generate a key, many figured out the URL that went directly to the 2.5GB download and passed that along. We just didn’t want to be overwhelmed with Watson and SQM data, so we capped the release at 2.5 million. That was silly but at least we received an indication of the excitement. There was a lot!

    Every day we tracked bugs with Watson and observed usage with SQM. Hardware vendors were providing updated device drivers that were anxiously downloaded by millions of testers, many seeing new drivers arrive automatically by Windows Update. More new PCs would arrive to be qualified. More legacy hardware would be retested. More of the over 100,000 apps in the wild would be checked for compatibility. More enterprise customers would tell us that they were anxious to deploy Windows 7.

    Many reviewers chose to review the Beta as though it was final or at least something regular people might care about. It would be easy to gloss over this but for me it was an important part of promise and deliver. It had been a very long time, if ever, since a first beta for Windows was considered broadly usable and also had customers asking if it was okay to deploy it even more broadly. Promise and deliver.

    David Pogue, hardly a fan of Windows, practically filled an entire page of The New York Times with his review “Hate Vista? You May Like Microsoft’s Fix” where he concluded “For decades, Microsoft's primary strategy has been to put out something mediocre, and then refine, refine, refine, no matter how long and no matter what it costs, until it succeeds. That's what's exciting about the prospect of Windows 7. It's Windows Vista - with a whole heck of a lot of refinement.” Microsoft was back to making sure it got products right.

    In The Wall Street Journal, in another review on the front page of the business section, Walt Mossberg said “Even in beta form, with some features incomplete or imperfect, Windows 7 is, in my view, much better than Vista, whose sluggishness, annoying nag screens, and incompatibilities have caused many users to shun it. It's also a serious competitor, in features and ease of use, for Apple's current Leopard operating system.” He also posted a video review, part of his more recent work creating video reviews.

    All we needed to do was finish.

    By July 13, 2009, build 7600 was pronounced final and signed off on by GrantG and DMuir, the test leaders for WEX and COSD.

    Windows 7 was ready for manufacturing.

    July 9, 2009 was also my 20th anniversary at Microsoft and the team helped me celebrate by dressing like me—jeans, v-neck sweater, t-shirt. At least I was predictable.

    Shortly thereafter at a sales kickoff in Atlanta, the annual MGX, we surprised the global field sales force and created a media moment when SteveB, KevinT (COO), and I held up a “golden master” DVD (a gold DVD), symbolically signing it as we announced that Redmond had signed off on the Windows 7 RTM build. It was a release that 10,000 people worldwide had contributed to and would likely end up on over 1 billion PCs. It was complete with another photo of me looking uncomfortable celebrating with SteveB on stage at the sales meeting. The sales meeting was always a country-mouse/city-mouse moment for me.

    Just as soon as that excitement was over, Microsoft announced what turned out to be the worst earnings in the company’s history, with a 17% drop in quarterly revenue in July 2009, the end of the fiscal year. The tough part was we knew this was coming while we were on stage, which made our celebration much less about the past and more about hope for the future. While many would blame the economy or Vista and some would even cite the recently announced Google Chrome OS (the predecessor to the Chromebook), the truth was much more secular in nature. PC makers were struggling on the bottom line as the 40 million netbooks, as exciting as they were, lacked profit. The astronomical rise in netbook unit sales discussed in the previous chapter led many to assume a bullish future. In fact, netbooks masked a secular decline in PC sales. We would know more as the year progressed as new PCs were sold with Windows 7 during the 2009 holiday season.

    After the celebration, the team collectively exhaled. It was August and time for vacations, but we didn’t have a lot of time to waste. Come September, we had to get the team fully engaged to plan what was coming next or it would be a massive effort to regain momentum.

    Our team of administrative assistants outdid themselves with a wonderful ship party held on the activity fields in Redmond. In contrast to other Windows events, I would say this one was less eventful and even comparatively subdued, but still enormously fun for the team. We had custom cakes and cupcakes, tons of food, family-friendly games, craft beers contained to a two-drink limit within an enclosed area, and even Seattle hometown favorite The Presidents (of the United States) as the special musical guest. They took the popular song “Cleveland Rocks” and rewrote the lyrics as “Microsoft Rocks.” I still have a bootleg recording of that.

    The competitive issues we were facing weren’t going away. The organization was about to change and clarify responsibility for dealing with those.

    Launching Windows 7

    We had a fantastic foundation to build upon and all we needed to do was deliver an odd-even result—meaning a good release after the Vista release—and each of our core constituencies would breathe a sigh of relief. Looking outward, however, it was obvious the world was a very different place than when we started Windows 7. It was not clear any of those constituencies or even our own team were prepared.

    There was still a product to officially launch, but not before some realignment at the top of the organization.

    Bill Veghte (BillV), who had started at Microsoft right out of college and had overseen Windows 7 marketing through to launch planning, decided after two decades with the company that he wanted an opportunity to run a business end-to-end and announced his intention to depart Microsoft. After a transition he would join Meg Whitman at Hewlett-Packard. With that, SteveB wanted to put all of Windows under one leader and asked me to do that. He really wanted to elevate the job title, which I pushed back on because, given the way Microsoft was structured (and remains so), we did not really have what most would consider true ownership of a business. Nevertheless, that was the origin of the job title of divisional president.

    One of my first tasks was to hire a marketing leader to take over from BillV, one who would best represent the collaborative culture we aimed to create. I wanted to bring finance and marketing together under one leader because the Windows business uses billions of dollars in pricing actions to fund marketing through OEMs. Tami Reller (TReller) had been the finance leader for the Windows business, reporting to the corporate CFO. She had joined Microsoft 10 years earlier as part of the Great Plains acquisition, where she had been leading marketing. I got to know her then as the acquisition fell under my then manager, Jeff Raikes (JeffR). She was the perfect combination of marketing and finance leadership for a business where those went hand in hand, and she brought a great deal to our management team.

    Microsoft wanted (needed) a big launch for Windows 7 and so did the industry. As had become a tradition for me, I wanted to spend the launch in Asia while my peers led the US event. My connection to my Microsoft family in Japan, China, and Korea ran deep and the business for Microsoft in those countries was huge. I couldn’t be in all places at once, so I chose to attend the launch in Japan. No one loves a retail launch more than Japan.

    I arrived two days before the October 22, 2009 launch. I never worked harder at a launch event. From first thing in the morning (easy because of the time change) until well past midnight (well supported now with Modafinil and the excitement at retail stores), we did press visits, interviews, and broadcasts, met with customers, and more. We’d shuttle from event to event in a Japanese microvan—all of us in our blue suits and ties with a stack of name cards.

    Two YouTube videos from Japanese Windows fans captured Akihabara Electric City the night of the launch as well as the Ultraseven appearance.

    For years, even way back when I worked for BillG, I had been going to Yodobashi Camera in Akihabara (and Shinjuku) to see what Japanese consumers were buying and to buy assorted USB and power cables that are exactly the length I need. The size of the Yodobashi flagship is unimaginable. The evening before the event we got an incredibly cool behind-the-scenes tour of the store, getting a look at the entire operation at night as they prepared the signage that would blanket the store for our event there the next day, such as the big decals that covered the walls of the 5-story escalator. As someone who grew up in the shadow of Disney World, I found the underground tour of Yodobashi much like the underground of Disney, and I was told about the same number of people visit each year.

    The team put on an outdoor event at the front of the store the evening of the launch with all sorts of famous-in-Japan anime/cosplay actors and tech celebrities. And when the first copy was sold that evening, we did a press event right there at the front of the store. All along the main street of the Akihabara Electric Town, Chuo Dori, there were events in front of the many stores selling PCs and software.

    Microsoft Japan, MSKK, had come up with a crazy promotional partnership with Burger King. The chain created a seven-layer Whopper to celebrate Windows 7. It was five inches tall (13cm) with seven patties totaling 1¾ pounds (791g) of beef. It was unfathomable, even for a no-carb, protein person like me. The first 30 customers got to buy the burger for ¥777, or about $9 at the time. The launch team and I snuck over to the Burger King around the corner from Yodobashi and ordered the monster burger. None of us could eat it elegantly or even try to finish it, but we got some hilarious team photos of the attempt and general celebration.

    At a hotel ballroom, we held the main launch event for the press, featuring all the new PCs from Japanese PC makers. The event featured an Ultraman theme. Why? I knew about the movies but was not a huge follower. What I learned was that Ultraseven was the third installment of Ultraman from the late 1960s. It was still wildly popular in some circles in Japan. The launch had a cast of people doing choreographed battle scenes in Ultraseven and Ultraman outfits. It was something to see. I filed this away for another Lost In Translation memory.

    MSKK hosted both a casual user group meeting and a formal business launch. At the user group meeting we did demos and gave away bags of Windows 7 logo gear at a series of demo stations in a cool Akihabara exhibition space down the street from Burger King. I wore a super cool Windows 7 windbreaker which I still have.

    The business launch was a formal ceremony highlighting the broad support of both hardware and software for the launch. Joining Microsoft was the head of Dell Computer Japan. Together with a group of MSKK employees and partners we participated in the traditional kagami-biraki celebration, cracking open a sake barrel with big wooden mallets.

    I’ve had the privilege of experiencing many product launches in Asia, but this time, for Windows 7, it was next level. MSKK is a gem of Microsoft. When I am lucky enough to be in Tokyo, even years and years later, walking around Akihabara I have the warmest and most vivid memories of the launch and my friends from MSKK. And sometimes my stomach hurts a bit thinking about the Burger King, which closed just before the pandemic. The news coverage of the event in Tokyo, which was amplified across the important Asian markets, was wonderful.

    Our confidence was high heading into reviews, which broke with availability of the product and new PCs in retail stores—we had plenty of positive reviewer experiences and no deep concerns. That’s what came from being not just in beta but running as the primary OS on reviewers’ and enthusiasts’ PCs for months. We risked a reviewer becoming somewhat bored or even cynical with the release simply because there was little new from the beta and no product drama to speak of. Hundreds of positive stories broke across print and TV. Local reporters did a lot of product reviews and buyers’ guides at that time. Waggener Edstrom worked tirelessly in the United States to feed them information and support.

    Walt Mossberg’s review evoked a positive tone that started in January with the beta release. For his RTM review he said, “Bottom line: Windows 7 is a very good, versatile operating system that should help Microsoft bury the memory of Vista and make PC users happy.” The headline read, “A Windows to Help You Forget: Microsoft's New Operating System Is Good Enough to Erase Bad Memory of Vista.” There was little more we could ask for in a review.

    Ed Baig of USA Today and one of the most widely read reviewers made it clear how positive he was on the product when he said “What you'll notice is that Windows 7 is snappier than its predecessor, more polished, and simpler to navigate. Screens are less cluttered. It has better search. Windows 7 rarely nags.…It sure seems more reliable so far.”

    Windows 7 was the first major release of Windows not to double the requirements for memory and disk space. While the box maintained the same requirements (also a first), in practice the reduction in memory usage and focus on Task Manager paid off handsomely. As much as we were proud of the business success, the engineering success of Windows 7 was among the most significant in company history and the reviews reflected this improvement in core software engineering competency. JonDe brought his engineering excellence to all of Windows.

    Major PC makers used the time from sign-off on the build to the October launch event to prepare the first Windows 7–ready PCs and get them into stores for holiday sales, including Black Friday in the United States.

    Industry analyst firm Gartner declared the “recovery of the PC market on a global level,” with preliminary numbers showing a 22.1 percent increase over the previous year. Their quarterly analysis was effusive relative to their own reports just months earlier that were doom and gloom. More than 85 million PCs were sold in the fourth quarter of 2009, up more than 10 million units from 2008. This even though we were in the midst of a global recession. One year earlier, the top line was that PC sales had crashed. By the end of the first quarter, Gartner would revise their forecasts for 2010 upward to almost 370 million units, growing nearly 20%. The primary reason was that mobile computing, including netbooks, was on fire. Gartner concluded “It was the strongest quarter-on-quarter growth rate the worldwide PC market has experienced in the last seven years.”

    It would be incorrect to assume cause and effect relative to Windows 7. There existed pent-up demand for new PCs to replace aging ones. Windows Vista had caused many, both at home and at work, to hold off buying new PCs, and the recession further slowed those decisions. Windows 7 brought many people back into the market. The shift to mobility was helping PC units, but the low cost of netbooks hurt the profits of the major PC makers.

    The competitive forces were real. Apple was doing very well in the US (and Japan) and finished the year selling 24% more units year over year. The strength in consumer sales was the headline supported by the so-called consumerization of IT, where consumers were buying their own preferred PCs to do work rather than rely on stodgy corporate PCs that were slower, heavier, and burdened with IT software.

    It felt, at least to me, that I’d been holding my breath for more than three years.

    I walked through Meiji Garden and Shrine early the morning of my flight home, a travel day tradition and one of my favorite places on earth. The world’s economy was still in shambles from the Global Financial Crisis. The PC sales everyone was excited by were obviously juiced by the start of an economic recovery and by netbooks. These were not going to last. When it came to netbooks, the major OEMs were anxious to exit the market and return to their view of normal. The problem was, as it became clear, netbooks were additive to a shrinking market. Consumers wanted portability. They were willing to try netbooks, but the product could not meet expectations.

    When NPR reviewed Windows 7 in a very positive review, even the introduction espoused the end of the operating system, saying:

    We are in the modern world now and, while Windows continues to be the default OS, everyone is talking about Mac OS X, Linux and the second coming of, wait, no, just the much-anticipated arrival of Google's Chrome OS.

    The future is the Web, not the OS, and everyone knows it.

    As the Narita Airport customs officer stamped my passport and I walked through the turnstile, I could finally exhale. I think my stomach still hurt from the attempt at the seven-layer Whopper. Everyone else was heading back from the New York launch events and, other than the coverage I read, I don’t remember if we even took the time to share stories.

    I had the same feeling I had when Office 2007 finished. As happy and proud as I was, it felt like the end of an era. With the huge shifts happening in PC sales, PC makers, the internet and cloud, and mobile phones, there was no denying we were in another era. When I looked at Windows 7, I did not have a view of “look what more we could do” as much as “we’ve done all we can do.”

    What I do remember more than anything was talking to members of the team in the hallways, at meetings, in remote offices, or over email throughout the course of the release. No matter what was going on and how difficult things got, I will always cherish all the people who shared their feelings about doing some of the best work of their careers—thoughts I still hear even as I write this. It was incredibly rewarding. That wasn’t about me, but about the system and the plans put in place by the team of leaders we assembled. The Windows team was a new team. It was so ready to take on new challenges.

    With RTM, everyone on the team received their ceremonial copy of Windows 7. It was nice to have something to put on a shelf reminding each of us of the project and what we accomplished. For many, the next stop was the Microsoft Store to get upgrade copies for friends and family—another Microsoft holiday tradition.

    The growth in mobility and demand for quality played right into Apple’s strengths, though not at first glance. The market continued to pressure Apple on low prices and did not see the weakness we did when it came to netbooks. On the heels of the Windows 7 launch, Apple released several new Get a Mac commercials. Among them was the segment “Promises,” which recounted all the times Microsoft had claimed, at each new release of Windows, that the new release would not have the problems of the old one. This time the charge wasn’t accurate, but that didn’t matter.

    In fact, Steve Jobs and Apple had a product in mind even more disruptive to the PC than the iPhone or MacBook Air…while we were just starting the next Windows release.

    On to 097. A Plan for a Changing World [Ch. XIV]



  • While it is incredibly fun to do a first demo of a big product as described in the previous section, there is something that tops that and even tops the actual release to manufacturing. That is providing the release, actual running code, to a product’s biggest fans. It was time to welcome everyone to Windows 7 and put the code that the team had been working on since the summer of 2007 out for the world (of techies) to experience.

    Back to 094. First Public Windows 7 Demo

    Seattle summers are notoriously difficult on product development. After a long spell of clouds and rain, the beauty and long days of a Pacific Northwest summer arrive, and neither is particularly conducive to coding.

    Summer wasn’t why I ended up here, but it certainly had an impact on me. On my first visit in 1989 during a dismal February, I saw the outdoor Marymoor Velodrome down the street from Microsoft and thought, this is going to be great. On TV it didn’t look as difficult to ride as I eventually learned it was. I sort of rode it exactly one time and that was the day my bicycle arrived from Massachusetts.

    But alas, product development demands don’t end even with 15 hours of daylight.

    It was going to get busy for the Windows 7 team. Our planned schedule called for the third development milestone to be complete by the end of summer 2008.

    We were making progress, but the schedule was slipping. The code was getting better every week, but the overall game of schedule chicken that often plagued a large project was an historical concern. This was our first time as a new team going through this part of a product cycle. While we had a good deal of positive progress building a team culture, Windows was notorious for groups betting against each other’s schedules and being less than forthright with their own.

    When HeikkiK was running the Office 95 ship room, he declared that everyone should be working to finish first, not simply to finish second to last. We needed to get to the end of the milestone as a team working together without looking for one group to blame, since it is never one group. JonDe and I had this same concern.

    Everyone spent the summer installing daily builds on every PC we could. At one point, I must have had eight different PCs between home and office and was installing on all of them nearly every day. Every night I was installing a new build at home while doing email and other routine tasks. Even though my home “service level agreement” called for no beta software, an exception was made for Windows 7.

    I was working at two performance extremes. I went to Fry’s Electronics and built my own “gamer” PC from the best components. I spent big bucks on a newfangled solid-state desktop drive (not common at the time), a crazy graphics card, fast memory, and the most ridiculous Intel chip available. I installed Windows 7. I was blown away by the speed (as well as the noise and wind emanating from the mini-tower). Starting Word or Excel seemed instantaneous. Boot took low single-digit seconds. It reminded me of the first time I used a hard drive on my father’s Osborne computer and how much faster it was compared to floppy disks. I used this PC when I sat at my desk at work, which wasn’t often as I was always walking around the halls.

    At the other end of the spectrum were netbooks. I had taken a fancy to the Lenovo netbook, the IdeaPad S10, and, to the degree I could, carried it with me everywhere, especially to my favorite breakfast place (Planet Java) or lunch place (Kidd Valley), where I did a lot of Windows 7 blogging. Every netbook was close to identical on the inside, but the Lenovo had a good screen and a rugged exterior. I modified mine, replacing the spinning hard drive with a then-uneconomical solid-state drive to better emulate future laptops like the MacBook Air. It was my primary PC for writing blog posts, email, spreadsheets, browsing, and the like, which was most of what I was doing. When we finally got to the Professional Developers Conference, this was the PC I held up with a bright yellow “I’m a PC” sticker on it; the stickers were created by marketing in a jiu-jitsu move embracing the blowback from the Apple TV commercials.

    I was constantly on the lookout for memory consumption and the number of running processes, the signs of bloat in Vista—fewer processes and less memory were better. Each process was a critical part of Windows. The number of processes had soared with Vista, and each had overhead in complexity and performance (in contrast to Linux, Windows processes were much more substantial and important to track). Windows 7 was making impressive strides in reducing memory usage and process complexity. It served to make me feel connected to the engineering of Windows 7 and reminded me of counting bytes and seconds back in the day. I snapped screen shots of the Windows Task Manager and would bug JonDe and AlesH every couple of days.
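
    A minimal sketch of the kind of spot check I was doing by hand in Task Manager, counting processes and totaling their working sets, looks something like the following. This is my own illustration for readers, assuming the third-party psutil library; it is not a tool we used at the time.

        # Minimal sketch: count running processes and total their memory use,
        # roughly what Task Manager reports. Illustrative only; assumes the
        # third-party psutil library (pip install psutil).
        import psutil

        def snapshot():
            count = 0
            total_rss = 0  # resident/working-set memory, in bytes
            for proc in psutil.process_iter(attrs=["pid", "memory_info"]):
                count += 1
                mem = proc.info.get("memory_info")
                if mem is not None:  # some system processes deny access
                    total_rss += mem.rss
            return count, total_rss

        if __name__ == "__main__":
            count, total_rss = snapshot()
            print(f"{count} processes, {total_rss / (1024 ** 2):.0f} MB total working set")

    Fewer processes and a smaller total working set from one daily build to the next was the trend we wanted to see.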

    Each day, booting into a new build and seeing the progress made it a great day. Each day also revealed a crisis or challenge, but as a team we continued to move forward.

    Even though I was mostly an observer, the effort to improve performance was some of the best work of the release. It set a tone for making progress, but also for the ability of teams to work together. The conventional wisdom was that Windows Vista’s bloat was inevitable and unavoidable: as capabilities were added and the product grew, the slowdown could not have been prevented. Windows 7 disproved that theory.

    By midsummer, we had to slip the schedule based on our progress through M2 and M3. Originally our goal was to finish M3 and have a full beta in time for the previously scheduled Professional Developers Conference, PDC, in Los Angeles. We weren’t where we needed to be, so we took about an eight-week slip. The build at the PDC would officially be pre-beta, terminology we just made up. This would be our last slip. JonDe and I were privately relieved at the degree of the slip, but frankly the team was excited to be clearly on track, relatively, for the first time in many years. Depending on your experience or the context, eight weeks can seem huge or literally nothing. It was nothing.

    SteveB sent a memo to all of Microsoft outlining some of the work to date for the whole of the fiscal year. The company had made a lot of progress on many fronts. The topic that had occupied a great deal of discussion, and was a good portion of his memo, was Google and competing with it on the consumer front and the potential relationship with Yahoo. SteveB also described the emerging cloud strategy, and the fact that more would be shared on that topic at our upcoming PDC.

    Fiscal year 2008 was quite a year for Microsoft. Revenue broke $60 billion and operating profit grew 21 percent to $22.5 billion. The numbers were incredible. Still, the concerns about the PC and catching up on consumer services dominated Wall Street’s view. This memo was one of the early communications in a strategic shift to the cloud platform and you can feel the push-pull between cloud and the traditional model in the technology descriptions. It’s important to say that it was still super early in the journey to the cloud for enterprise computing and the topic was not top of mind for customers, especially as the financial crisis began to take hold. In fact, the feeling that the cloud was architecturally inferior to private data centers was by far the most common customer belief. Their future enterprise computing model was a data center running servers using virtualization. In 2008, the idea that there would be something of a new cloud operating system was mostly a view held inside the halls of Google.

    In the memo, SteveB announced that KevinJo was leaving to be CEO of Juniper Networks, and that JonDe and I, along with Bill Veghte (BillV) leading marketing, would report directly to Steve, a reporting structure that remained in place through the release without issue. This was a standard and expeditious way to handle a managerial change at this stage of a big product. Incidentally, Satya Nadella (SatyaN) had moved to manage Search and ads in March 2007 and would also report to SteveB in a similar move.

    In the lead-up to the PDC we began blogging publicly about Windows 7. With the focus on tech enthusiasts, IT professionals, and the trade press, I created a blog called Engineering Windows 7, or e7. An extension of how we thought about blogging for Office 2007, the blogs were the primary first-party communication channel for the product. We authored long and detailed posts, thousands of words, about the implementation choices we were making and how we measured progress. We offered tons of data to describe real-world Windows use (often my favorite posts). I authored posts but also introduced posts that other team members wrote, each expressing the design point of view and rationale. Many generated a great deal of dialogue and discussion and became news stories themselves. There wasn’t really a Hacker News yet for Windows coverage, but the comments sections of many stories read just like Hacker News would have read. Tech enthusiasts loved to dispute the data we provided then, just as they do today.

    While to some in the press the blogging came across as a carefully crafted corporate message, nothing could have been further from the truth. We were simply blogging. The posts did not go through any corporate machinery or apparatus. They were as authentic as they could be. And the tradition worked so well that after the PDC it became a significant part of the communication of Windows going forward.

    There were two relevant industry announcements that at any other time would have caused a great deal of distraction. The PC world was entirely focused on the PC, to the exclusion of mobile phones, and browsers, to some degree, were still seen as a distinct challenge to Windows because they still ran on Windows and had yet to do much beyond render text and graphics. Yet both phones and browsers would have announcements that would radically alter the competitive landscape for Windows 7.

    In June 2008 at Apple’s WWDC (the World Wide Developer Conference, Apple’s version of the PDC), Apple announced the much-anticipated and predicted iPhone SDK and App Store, which had been teased in a short note from Steve Jobs posted to Apple’s news site the previous October. Initially, the store had 500 apps, small relative to PC apps, but that number would grow at an astronomical rate. More importantly, it solved many key problems that had plagued PCs. In the note, Jobs said “We’re trying to do two diametrically opposed things at once—provide an advanced and open platform to developers while at the same time protect iPhone users from viruses, malware, privacy attacks, etc.” This controversial change riled tech enthusiasts but also ushered in a new definition of computer, one that was safer and more reliable than anything a PC (or Mac) could offer. The emphasis below is mine.

    Third Party Applications on the iPhone

    Let me just say it: We want native third party applications on the iPhone, and we plan to have an SDK in developers’ hands in February [2008]. We are excited about creating a vibrant third party developer community around the iPhone and enabling hundreds of new applications for our users. With our revolutionary multi-touch interface, powerful hardware and advanced software architecture, we believe we have created the best mobile platform ever for developers.

    It will take until February to release an SDK because we’re trying to do two diametrically opposed things at once – provide an advanced and open platform to developers while at the same time protect iPhone users from viruses, malware, privacy attacks, etc. This is no easy task. Some claim that viruses and malware are not a problem on mobile phones – this is simply not true. There have been serious viruses on other mobile phones already, including some that silently spread from phone to phone over the cell network. As our phones become more powerful, these malicious programs will become more dangerous. And since the iPhone is the most advanced phone ever, it will be a highly visible target.

    Some companies are already taking action. Nokia, for example, is not allowing any applications to be loaded onto some of their newest phones unless they have a digital signature that can be traced back to a known developer. While this makes such a phone less than “totally open,” we believe it is a step in the right direction. We are working on an advanced system which will offer developers broad access to natively program the iPhone’s amazing software platform while at the same time protecting users from malicious programs.

    We think a few months of patience now will be rewarded by many years of great third party applications running on safe and reliable iPhones.

    Steve

    P.S.: The SDK will also allow developers to create applications for iPod touch. [Oct 17, 2007]

    The App Store also provided distribution and awareness to developers, a way to make money, and a way for Apple to vet apps that might be harmful to consumers. At the time the focus was on the fact that “30 percent goes to Apple.” When we saw the store, though, we knew the change would be monumental. To find software for a PC, the best someone could have hoped for was a website such as download.com. There existed varying levels of trialware, freeware, spyware, and malware. Apple had solved the software distribution problem, made sure software was reasonably safe and high quality, and given ISVs a huge new avenue for creativity and money. All on the most exciting computer around, the iPhone. That was the bad news. The good news was that the world still viewed PCs and phones as totally different things.

    The world other than Steve Jobs and Apple, as discussed in the previous section. History would later reveal through email discovery the internal conflict that surrounded opening the iPhone to developers so broadly. The success would also go far beyond even what Steve Jobs anticipated.

    All that I could think was, Time. I need more time.

    That was not all I thought. I also greatly admired what Apple was accomplishing. Like many who joined the Apps division in the 1980s, I considered the Macintosh a crucial part of my early Microsoft work and of the years before that. The old Mac ToolBox APIs are forever imprinted in my memories. Other than the die-hard fans, few generally acknowledged the consistent refinement and foresight in Apple’s software design. Several of us on the team were “original” Mac third-party developers from the mid-1980s and had always admired not only the results but the patience of their process. The continuous iteration and complete execution of what they did was admirable. Apple’s business in Macs was definitely not something we worried about, but their product execution was worrisome. I looked over their R&D spend and compared it to Windows and Office. In 2008, Apple for the first time eclipsed $1 billion in R&D for the year, a big uptick from the prior year, perhaps an indication of the iPhone ramp and the services to come. The full Windows 7 team was spending about the same. There’s a huge difference in R&D when it comes to having a full ecosystem, but at the same time R&D for hardware is much more expensive. The main point is that not only were they building breakthrough products, they were doing so remarkably efficiently compared to Microsoft.

    The Mac might have been a better product, but Windows won, and the winning product becomes the better product in the market. It was not until the iPhone and its SDK came to be that the scope of what Apple had done with relatively little was broadly appreciated.

    A more shocking announcement that hit closer to home came six weeks later. Google announced the Chrome browser with a blog post and a classically Googley online comic that accidentally dropped too soon. Chrome, ironically named for the absence of user interface chrome, would prove to be monumentally disruptive to the browser world. Google had dramatically improved performance and security in browsing compared to IE 7 from Vista (or IE 6, which so many were still running) and the current leader Firefox. They had committed to open source and brought an entirely new level of energy to the browser battle. In his blog post, Sundar Pichai (yes, then product manager for Chrome) wrote in a nod to antitrust that the “web gets better with more options and innovation. Google Chrome is another option, and we hope it contributes to making the web even better.” In some ways, we were straight back to 1996 again. This would be a huge problem for the newly reconstituted Internet Explorer team, both immediately and going forward. Within a short time, there was a massive share shift to Chrome. Much like Gmail, Google released a product, seemingly out of nowhere, into what was viewed as a stable space. And they took it over. The privacy, tracking, and all the “evil” stuff Google’s browser would come to be known for were still years away, but at the time, a new competitive landscape was defined for IE. If our job was difficult before, it suddenly was even more so.

    The PDC took place on October 27th at the Los Angeles Convention Center. Azure, the new name for Red Dog, was announced in the day-one keynote. That proved to be both prescient and somewhat ahead of the curve for most attendees. Windows 7 was the day-two keynote and carried the bulk of the news for the show. In some ways the fact that most of the attendees did not seem to find Azure immediately useful made our jobs easier. Most attendees were still debating the proper way to pronounce Azure. The developer relations leader found the debate the night before particularly irksome, as he was Persian and had his own ideas of how to pronounce a word he claimed as native in origin. I was completely entertained by this late-night sideshow. Nevertheless, the attendees were somewhat puzzled by Azure compared to what they saw as vastly more interesting sessions on .NET, Avalon, virtualization, or the Windows 7 desktop. The disconnect was a harbinger of the disruption challenges the entrenched Microsoft would face.

    While the audience for the PDC was professional developers there to learn the latest in APIs, tools, and techniques from Microsoft, the front rows of the main hall were the all-too-familiar members of the press. Looking out from the stage, I could see all the stalwarts of Microsoft beat reporters and technology press who had been frustrated by the lack of information about Windows 7. We were doing a keynote for developers who had spent a few thousand dollars to be at the show, but in reality we were putting on a show that needed to be understood by the mainstream media and conveyed through the expertise of the industry press.

    Steve Jobs had upped the stakes with his spectacular keynotes, increasing pressure across the industry to put on a good show. The normal Microsoft keynotes, the kind pioneered by BillG, were long and detailed with complex architecture slides and many graphics. These were somewhat enhanced as we moved to enterprise computing with obligatory and infamous “partner videos” featuring senior IT professionals in front of architectural diagrams or racks of equipment extolling the virtues of Microsoft’s strategy. The audience expected this type of keynote and expected us to write code on stage. By those measures, the keynote might disappoint.

    While we tried to streamline the keynote, marketing insisted on having at least one customer testimonial.

    A pretty cool demo of work by Autodesk showing off the use of touch in Windows 7. (Source: Microsoft)

    Having said that, as we planned for my first keynote leading Windows, I knew that the biggest mistake I could have made would have been to try to emulate what I was not. Most importantly, I also had to find a way to apologize for Vista without throwing the product or team under the bus. I had to find a way to be excited about Windows 7, realizing we had holiday PCs with Vista still to sell. Above all, our announcement was for a pre-beta, not even an official beta, though it was ultimately a distinction without a difference.

    For my part, I went with who I was—like Sammy Davis Jr. used to say, “I’ve gotta be me.” The slides I showed were sparse and my words carefully chosen. While not one for grand entrances, I did choose “I Can See Clearly Now” by Johnny Nash as walk-on music. I was the only thing standing between the thousands of press and attendees and their seeing Windows 7 code. We knew people were there to see a demo, not a build-up or a long story. Just get to the clicks. I used just two minutes and not even 400 words, then introduced JulieLar to step on stage and start clicking. I stepped down from the podium and remained upstage opposite Julie.

    As soon as she brought up the screen on the monitor, people started taking pictures, some with their new iPhones, but most with Windows Mobile. Since they were live-blogging the event, we knew they were noting the build number visible at the bottom of the screen, confirming that our debate about which version of Windows we were working on would be part of the conversation. Within about a minute Julie got her first round of spontaneous applause and hoots. The demo was fantastic and every time she said “…works the way you want to,” we could feel the excitement. She demonstrated features both with a mouse and with touch on a monitor, including an on-screen keyboard with predictive text and more. The bulk of the demonstration emphasized “putting you in control” of Windows.

    Once she was finished, I stepped onto the center of the stage and got to say, on behalf of the entire team, something two years in the making.

    “Welcome to Windows 7, everyone.”

    It was the perfect demo to introduce the product.

    At some point in the keynote I needed to address what everyone was waiting for: what did Microsoft really think about Vista? While the press would no doubt take note, the credibility of what was said would rely on winning over the tech enthusiasts. More than any other audience, the tech enthusiasts in the room were the most disappointed in Vista and felt let down by the product. From March 2006 when I came to Windows, I promised to never be critical of what preceded me, and I intended for that to be the case. It would have been so easy, and so cathartic for the room, to profusely apologize for Vista. It would have been equally wrong to pretend that we had not made some sort of mistake. I chose a path of subtlety, acknowledging “feedback” in all its forms, including a few television commercials. With a slide titled “Transition from Windows Vista” I framed the work we had done since Vista released as providing context for the day’s keynote:

    As we set out to build this release of Windows, we really did have to recognize the context with which we were releasing Windows 7 and developing it. And that’s in transitioning from Windows Vista. We certainly got a lot of feedback about Windows Vista at RTM. (Laughter.) We got feedback from reviews, from the press, a few bloggers here and there. Oh, and some commercials. (Laughter.)

    As part of the session, we wanted to highlight some of the features that were specifically relevant to the developer and enthusiast crowd. I took a moment to show seven features (the number 7 was used a lot) that were chosen specifically to generate applause from the crowd:

    1. BitLocker disk encryption (previously in Vista Ultimate)

    2. Mounting VHD (a virtualization feature)

    3. High DPI (support for really big monitors and normal sized text)

    4. Magnify (an assistive technology for low vision that is also useful for product demonstrations)

    5. Remote Desktop across dual monitors (the first live dual monitor demo we ever did)

    6. Taskbar Customization (anything with customization is a pleaser)

    7. Action Center Customization UAC/Notifications (the improvements over Vista for enthusiasts)

    There was a clear call to action for developers, including moving to 64-bit, using Windows touch, and more, but mostly to download and install the Windows 7 pre-beta. People were doing that as soon as the lights faded.

    We wanted the keynote to be easy and approachable, not usually the norm at the PDC. That also meant we would leave out a good deal of the team’s work and new features present in the beta build. To that end we created a massive “Product Guide” for the trade press. We would also follow that up with a workshop for them to attend where they would have a chance to ask questions directly of the product leaders. The full product guide ran 119 pages! The team promised and delivered, and you can see this from the prominence of the “Engineering Focus Areas” in the guide which were taken straight from the product vision and mock press release.

    While we would not normally expect long-form reviews and deep dives for a pre-beta, Windows 7 was generating so much interest that the tech press was filing tons of stories, as were individual bloggers who drilled into every aspect of the changes from Vista. YouTube was filled with demos created in short order. Windows 7 was at the top of Techmeme, and not for messing up.

    An example of the coverage was ActiveWin, a Windows-focused outlet, which wrote over 13,500 words plus screen shots on the pre-release. Andre Da Costa wrote the piece, releasing it on October 31 as the conference ended. They dove into seemingly every detail, even including their summary of the key goals of the release:

    Key Goals:

    * Under-promise and over deliver

    * Reduce Compatibility problems and bring investments in Vista forward

    * Reduce disk foot print and memory foot print

    * Improve performance

    * Secure, predictable

    * Make the Windows and PC Experience easier

    * Exceptional hardware and software support

    * Bring future releases to market faster

    * Personalized experience that defines you

    * Superior mobility through reliable performance, power management

    ActiveWin concluded better than anything we could have written ourselves. Promise and deliver:

    It’s safe to say I am overwhelmed, overjoyed and most of all excited about Windows 7. This is the release of Windows everybody has been waiting for, it’s what Vista was meant to be and beyond that. Windows 7 puts the user first; it’s about going back to the fundamentals of what an operating system must do. Managing and maintaining your PC is exceptionally seamless in Windows 7 and users will appreciate the tremendous improvements and advancements this update will offer on both existing and new hardware form factors in the future. Windows Vista set the foundation for a lot of what is happening in Windows 7 today. Windows 7 makes security Essential, but not aggressive like Windows Vista. The improved UAC will no doubt give consumers confidence in this feature, just the fact that you can tweak it to a certain degree is a welcome change. Businesses will appreciate the improvements to how the OS is managed and deployed while mobile users can get better experiences between their work and home environments. Home Networking has finally reached a level of ease of use that will make even the novice to make those PCs in the home talk to each other. There is still a lot of work to be done as this early glimpse shows. But Microsoft is on the right path with Windows 7, focusing on ease of use, compatibility, better ways of interacting with the PC and managing the personal data. This is an upgrade I am looking forward to and you should too.

    Posts on Windows 7 experiences across all sorts of different hardware appeared. Engadget, everyone’s favorite tech blog, tried Windows 7 on an ASUS Eee PC, writing “just as Microsoft demonstrated, the relatively lightweight Microsoft OS required just 485MB of RAM when Windows 7 was fully loaded, sans applications of course. Hot.” The article’s title was even a reward for all the work the team put into this specific metric: “Lightweight Windows 7 pre-Beta on Eee PC 1000H looks very promising.” I can personally confirm that memory usage number.

    As a manager of a giant product and team there are, honestly, few truly rewarding moments that are also deeply personal. Nearly all the time there’s worry about how the team is doing and if they are finding the joy they deserve. October 28, 2008 was one of those exceedingly rare moments for me.

    On to RTM…

    On to 096. Ultraseven (Launching Windows 7)



  • In an era of huge software projects with a zillion new features in every release, there’s little more exciting than the first public demos. Such demos are also incredibly stressful to pull off. In addition to all the work to just get the code to demo-ready condition, there’s a lead-up to public disclosure, briefing reporters, and aligning partners. The first demo of Windows 7 was all those things and more, because we (or really just I) had been so quiet for so long. This is the story of unveiling at least one small part of Windows 7, along with my own personal screw-up along the way.

    Back to 093: Netbook Mania

    The second of three development milestones for Windows 7 was originally scheduled to end on March 26, 2008 which was eight months after the project start, Vision Day. We ended up finishing on May 9, which was a slip of 44 days. For any massive software project, this was fantastic. For Windows, it was doubly so.

    It was even better than that. The new organization was starting to take hold. The product was emerging. The team was executing. We were building what we committed to build, and it was working. The “daily builds” were happening and by and large the team was running Windows 7 every day.

    After two years in this role leading Windows, I finally felt like it would be OK to emerge and talk about what comes next. It is difficult to put into words the constant gnawing, sick-to-my-stomach feeling up until now wondering if we would deliver. We had definitely promised but for nearly 20 years I had seen leaders across the company say “the team is feeling good” or “we’re making good progress” or “the milestone is complete” only to see the project unravel or simply recognize it was never actually raveled.

    For months I had been under immense pressure from OEM partners, our OEM account managers, enterprise account managers, investor relations, Intel, retailers, not to mention SteveB, and many more to just articulate a ship date or some plan. Hardly a week went by without receiving a forwarded email detailing the costs of not disclosing what we were up to.

    Yet I was perhaps irrationally concerned that I would put something out there only to have to recant or adjust what was said. Many told me I was being overly cautious. Many said that it is better to open up communication and worry about having to correct it later. I just couldn’t shake the concerns. I felt Microsoft had one chance to make up for the issues with Vista.

    Many perceived that the Windows team was trying to become more like Apple and close off all discussion of a product until the moment it was announced. This was not the case at all. Windows is a different product, as described previously, and bringing it to market requires a huge ecosystem of partners that invest time and money. There’s no way to surprise the market with Windows because an entire industry needs to know about it, prepare, and execute to bring new PCs, peripherals, and applications to market.

    For months, Roanne Sones (RSones) and Bernardo Caldas (BCaldas) on the Ecosystem team had been in deep technical discussions with partners about what would come next but had not yet committed to a timeframe. Any hints of a specific schedule (or business terms such as SKUs or pricing) would immediately make it back to the business side of the house and then to SteveB’s inbox. Even topics such as whether there would be a 32-bit release (versus moving the ecosystem to 64-bit only) would have had broad implications for PC makers (and Intel). We had to walk a fine line between being excellent partners and creating an external news cycle that impacted partners as much as us. We knew that release dates were the most likely to be leaked, and the most damaging. Finishing a product with a giant, hovering countdown clock had dogged many past Windows releases. Yet, the partners needed time to prepare, and we were closer to finishing than starting. Windows 7 would soon be fully disclosed to the OEMs.

    When asked in any forum, we said our goal was to release Windows 7 “within three years of Vista.” We were intentionally vague as to whether that meant release to manufacturing, available for enterprise download, first PCs in the United States, or some other market. Effectively, this gave us a buffer of about three months. And yes, that was sneaky, but it was the one concession I made to disclosure. I really hated that all people cared about was a date when a product was so much more than that. I understood, but still.

    Then, in April 2008, BillG gave a speech, and in one small part some believed he inadvertently implied that Windows would finish in the following year. The press, who were there to hear about international finance at the Inter-American Development Bank meeting, ran with it and suggested Windows 7 would be ready much sooner than the previously planned three years from Vista. In fact, a year from April 2008 was sooner than our published schedule. That was not going to happen. Explaining that inaccuracy without stating the ship date was impossible.

    It wasn’t just that Bill said the next Windows would arrive “sometime in the next year or so.” He also expressed his enthusiasm in what was certainly meant to be a throwaway line, but to a tech industry desperate for any news it came across as much more, when he said “I’m superenthused [sic] about what it [Windows 7] will do in lots of ways.”

    We were close enough to completing the milestone that it was time to plan on officially talking to the press, who would be happy to talk off the record while also helping us to reduce the amount they would need to absorb all at once when it was time for stories to be written. In parallel the Ecosystem team began working with OEMs and ODMs on the detailed schedule and on software drops.

    Our first stop, as it had been with every product I worked on since Office 95, was Walt Mossberg at The Wall Street Journal. Our meetings had become somewhat of a routine, perhaps for both of us, though by no means easy or predictable—I usually prepared an overly large amount of data to demonstrate how people were using our products out in the wild and hoped both to inform him and to push for some positive recognition. Sometimes, yes, I went a bit overboard on the data. Walt was staunchly independent and would never say if I was persuasive, but he was always thoughtful in his questions and comments.

    By this time, Katherine Boehret was joining Walt when he visited. She started with The Wall Street Journal out of college. By 2011 she had her own column called This Digital Solution, and also worked with Walt and Kara Swisher on the All Things D Conference (ATD). Katherine and Walt together were a formidable audience. They were both deep into products with their own unique perspective and would put up with absolutely no spin or marketing. They were advocates for their readers and strident in their desire to see PCs live up to their ease-of-use potential and played no favorites.

    This meeting, about a month after BillG’s speech, had a dual purpose. We wanted to at least try to defuse some of what they had no doubt perceived (rightfully) as a mess with Vista without throwing Vista under the bus, while also setting the stage for Windows 7. If all went well, we might even secure time at All Things D that year for a quick Windows 7 demo at the end of an already scheduled BillG and SteveB joint interview.

    It was stressful. It was Walt. And Windows 7 was not fully formed for reviewers yet. Joining for the meeting or parts of it would be Julie Larson-Green (JulieLar) for Windows, Dean Hachamovitch (DHach) representing Internet Explorer, and Chris Jones (ChrisJo) discussing Live Services.

    Meeting in a conference room in building 99 with a half dozen demo laptops on the table, I started with our usual printouts of data, showing them an overview of Windows Vista in market. Walt’s earlier review of Vista called it “maddeningly slow even on new, well-configured computers.” Katherine’s writings had been a bit less harsh, but not by much. I had to at least try to change their minds, but neither Walt nor Katherine was impressed. I took the time to talk about the landscape of PCs being sold and what was going on with laptops and Netbooks. In reviewing the original Asus Eee PC, Mossberg concluded it was a “valiant effort, but it still has too many compromises to pry most travelers away from their larger laptops.” That led to a hot topic for all reviewers, but especially Walt, who had praised the MacBook Air: When would Windows see a MacBook Air competitor? Walt, JulieLar, and I had discussed the MacBook Air at the Apple launch event months earlier.

    My lack of an answer on behalf of PC makers was not satisfactory for them, or me. As described previously, the PC makers were much more focused on inexpensive devices like Netbooks and not eager to take on Apple or the premium PC market.

    Browsers were much discussed in the late 2000s, though not the one from Microsoft. We didn’t know it at the time, but in hindsight it would be fair to assume they had been, or were soon to be, briefed on the forthcoming Google Chrome browser that shipped in late 2008. Still, Walt and Katherine wanted to know about Internet Explorer and privacy, a hot industry topic among a few, but especially for them. We were woefully behind Firefox on core browsing capability, but we had a fantastic story to share about the privacy features that DHach and team had developed, including blocking “tracking cookies.” We showed them how mainstream sites, like The New York Times, were doing a poor job communicating to users how much information was being shared and with whom, with only vague permission or even disclosure. We did not go as far as offering ad-blocking, which many tech enthusiasts would have appreciated, but we did plan on releasing, and showed, a “Do Not Track” feature.

    During development, a series of meetings with lobbyists from the advertising industry about the Internet Explorer privacy features had led to veiled threats about anticompetitive behavior by Microsoft against ad-supported Google. Such hints or even threats were common from anyone connected to the Washington or government communities. This was unrelated to the Consent Decree, though there were still a couple of years left on that agreement and the oversight meetings that I routinely attended. As a result, the Internet Explorer 8 privacy features that were well received in this briefing would ultimately be scaled back due to an enormously frustrating push from the senior ranks of Microsoft’s legal department to capitulate to the lobbying groups, to avoid drawing the attention of regulators, and to spare our own nascent advertising business from having to comply with privacy requirements. Do Not Track was essentially shelved even before we started. Today, the capability is a core part of Apple’s platform and the Microsoft Edge browser.
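
    For background on the mechanics, the “Do Not Track” idea ultimately took the shape of a simple HTTP request header (DNT: 1) that a browser sends and a website may choose to honor. A minimal sketch of a site honoring that signal, written in Python with Flask purely as an illustration and not as anything Internet Explorer 8 shipped, might look like this:

        # Minimal sketch: a web endpoint that honors the Do Not Track request
        # header. Hypothetical illustration (Flask); not Microsoft's implementation.
        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/article")
        def article():
            # Browsers with Do Not Track enabled send the header "DNT: 1".
            if request.headers.get("DNT") == "1":
                # Skip third-party analytics and tracking beacons for this request.
                return "Article content (no tracking beacons)"
            return "Article content (with analytics beacons)"

        if __name__ == "__main__":
            app.run()

    The catch, then as now, was that honoring the signal was entirely voluntary for websites.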

    Our primary goal for the meeting was to showcase Windows 7. For the first time, we offered up a full disclosure of our overall goals and schedule. We trusted Walt and Katherine as we had built a great working relationship with them over the years, but, more importantly, because of their unmatched professional integrity.

    After the requisite, but polite, reminder of the holes we had dug with Vista, and after discussing Internet Explorer and Live Services, we moved to Windows 7 and a demonstration of some of its working features. JulieLar led a deep dive into our theme of “putting the end-user back in control.”

    We discussed improvements to the dreaded UAC experience. User Account Control was introduced with Vista as a mechanism to lock down a typical consumer PC and prevent software from being installed by accident. Unfortunately, the swift reaction to such a “nanny feature” was universal loathing. It became a symbol for the dislike of Vista. As it would turn out, this feature was only the first of what would become the typical smartphone experience in years to come, but being first at getting between tech enthusiasts and their downloaded software also incurred their wrath. It was also the subject of one of the more biting “Get a Mac” television commercials from Apple. Shortly after the Vista launch, the internet was filled with instructions to disable UAC, which we definitely did not appreciate. Julie demonstrated the improved, though still secure, experience, which was much smoother and well designed and added options for enterprise admins and tech enthusiasts to control the feature.

    Julie’s demo succeeded in bringing together many concepts in the basic experience: launching programs, switching between running programs, and taming the array of distracting notifications and alerts. We were calling the collection of improvements to the Windows taskbar the new Superbar. With confidence, we compared the Superbar to the OS X dock, knowing we had solved problems that the dock had not.

    We showed them the collaboration with PC OEMs on what would be new with Windows 7 PCs. The Ecosystem team had a long list of improvements to device drivers, supported hardware, and features to make the out of box experience for new PCs better for consumers.

    And we had a surprise for them.

    A big bet in Windows 7 was to implement a touch interface across the product, with features in the desktop experience and APIs for developers, as well as device and hardware management. We had been working closely with OEMs to define standards and configurations that would bring touch to Windows 7 PCs. OEMs were excited due to an entirely new engagement from MikeAng and team to enable quality touch in new PCs. They believed this would help differentiate from the Mac. We had an even bigger vision. We wanted this for all PCs eventually.

    Months or more from broad pre-release and totally hush-hush, JulieLar demonstrated how we had moved applications from the original Surface table computer to PCs connected to desktop monitor touch panels. The Surface table PC, the original Surface, was a product developed in the Hardware division. It was not unlike an ’80s arcade table, featuring a modified version of Windows combined with custom hardware enabling a new form of multi-touch interaction. The table had found niche uses in Las Vegas, as information kiosks, and had been demonstrated by BillG at the previous year’s ATD Conference. As it related to Windows 7, there were touch APIs and the foundation of hardware support. Our main demonstration was mapping software that zoomed in and out using multitouch (like on the new iPhone) along with a virtual touch keyboard, which combined would offer up many opportunities for developers. On Windows, touch went beyond just using fingers to also include the digitizer needed for pen computing. It was the only feature BillG consistently pushed for in the release.

    While touch was a part of Windows 7 from the start, there were two reasons we chose to emphasize it as an early Windows 7 feature in this meeting. Showing touch early was counter-intuitive because it was totally new and could have easily remained secret, for an actual surprise.

    First, we wanted to garner broad OEM support for touch which was a long-lead feature for them. No OEMs were selling touch screens which meant sourcing and developing a product was a significant investment and effort. Momentum from the conference demonstration would represent a key public commitment by Microsoft.

    Second, there had long been rumors that Apple would add touch to the Macintosh, and with the success of the iPhone this seemed more likely. Whether such rumors turned out to be true or not, the opportunity to garner ecosystem support and get ahead of Apple, while also showing off a BillG pet feature as he appeared at the conference, seemed positive all around. To BillG and other pen advocates, it seemed “obvious” Macintosh would gain touch and handwriting support. Microsoft’s Tablet PC had been in market for years already and had not seen a competitive entry, so the logic went.

    Neither Walt nor Katherine ever gave a thumbs-up reaction at a first showing, always reserving judgment until they used and wrote about a product themselves. Walt agreed to consider a demo of the touch features of Windows 7 at the ATD Conference a month later. They wanted us to show more, but we chose to keep the demo focused on what the ecosystem partners would value.

    We had a lot of work to do, but we were nervous-excited.

    With the ATD Conference pending, we were faced with a ticking clock, which meant we needed to disclose more details about Windows 7. The touch demo was too fragile and too elaborate to take on the road. We did not want to disclose details of the product without evidence, or, more importantly, a call to action for either developers or OEMs.

    Adrianna Burrows (ABurrows after joining Microsoft) was the senior vice president assigned to the Windows account at the Waggener Edstrom communications agency. Adrianna drove the agency strategy for Office and was assigned to Windows when I moved. She was an astute communication and marketing pro, had a keen ability to create the right story at the right time, and was an elite distance runner and French speaker by upbringing. While she was at the agency, she was a key part of our senior leadership team. She was also the most competitive person I had ever known and would never accept second place.

    People in communications rarely say not to talk when given an opportunity; at least that was the case in the 2000s. Reporters are going to write even without first-party commentary, and eventually whatever they write becomes more plausible than anything a company might later report. I had been quiet for too long. We were on the cusp of having a narrative created for us—one that would read something like: Windows 7 is going to be a “minor” service pack rushed out the door to fix the woes of Vista, built on a smaller kernel, MinWin, as the key technology. While that might introduce some compatibility concerns, it would enable finishing the release in early 2009.

    Adrianna proposed a long form interview with a highly regarded Microsoft beat reporter, Ina Fried of the influential CNET. Ina was a thoughtful journalist with a wide-ranging understanding of the dynamics of the industry. She was widely read and by the right people. Adrianna was able to arrange to have a full transcript of an interview published along with Ina’s story to reduce the risk of being edited. I thought that was a solid idea at that moment.

    Adrianna created the perfect opportunity for us even though I didn’t know what to say. More accurately, saying nothing was my comfort zone. While I never liked to speak unprepared, I had not worked out answers that sounded credible for the questions I was obviously going to be asked.

    I got on the phone with Ina, Adrianna right there with me in my office with the call on speaker. For an hour I did my best Muhammad Ali rope-a-dope. I acted as though I had been forced to make the call. I gave a lot of non-answers. I’m sure Ina was confused since we had initiated the interview. Adrianna was tensing up the whole time—I could see her eyes widen with each non-answer. The more I spoke, the deeper the hole I dug. My answers got shorter and my deflection increased—all I could think of was that I didn’t want to talk yet because I was so unsure of what we would get done and when. I could not figure out why I was talking and what the call to action was for readers.

    I was trapped. I felt like we talked for the sake of talking and lost sight of the lead up to the first demo as the purpose.

    Ina’s story ran the day after the call, right before Memorial Day, as we were heading out to the ATD in Carlsbad for our first public demonstrations of Windows 7. It was 3,000 words of me saying nothing.

    The headline said it all: “In an exclusive interview, Steven Sinofsky offers up a few details on the new operating system and the rationale for why he is not saying more publicly.”

    Adrianna wanted to punch me. I had blown an opportunity. I felt bad, but the damage was far worse for the team, who were confused because the interview ended up pushing the needle back to opaque from translucent. I made a mistake and handled it wrong.

    I learned the hard way that I should have either not done the call or done it well.

    Fortunately, All Things D gave us a chance to undo the damage. Bill Gates and Steve Ballmer were to appear on stage together for one last time. The goal for Microsoft was to show an orderly turnover as Bill announced the end of the two-year transition from Chief Software Architect to non-executive Board Chair and would no longer work day-to-day at Microsoft. After questioning, they would turn the stage over to a “surprise” demo of Windows 7 from JulieLar.

    Julie and a veritable force of a dozen people had been at work hardening the Windows 7 demonstration for ATD. All had been setting up the demo since the night before.

    On stage, BillG and SteveB discussed the transition and answered questions about what would happen in a post-BillG Microsoft. Steve described the early financial controls and conservative hiring approach Bill put in place that became the hallmark of Microsoft. There was a touching and relaxed retelling of the way Bill recruited Steve to join the company, including Steve’s recollection of “a computer on every desk and in every home.”

    Later, in a pointed question, Walt asked Steve, “Is Vista a failure? Was it a mistake?”

    “Vista is not a failure and it’s not a mistake,” SteveB said. “Are there things that we will continue to modify and improve going forward, sure. With 20/20 hindsight, would we do some things differently?” He told Walt and Kara undoubtedly, yes, but then added that Vista had sold a lot of copies.

    Walt asked if Vista had damaged the Windows brand. Bill jumped in with, “Well, there’s no product we’ve ever shipped, including Windows 95, that was 100 percent of what I wanted in the product…. We have a culture that’s very much about, ‘We need to do better.’ Vista gave us a lot of room for improvement.” The audience, and especially Walt, laughed.

    Then Windows 7 was up.

    JulieLar walked on stage and did a slick, six-plus-minute demo. It was the product that we had always envisioned, executed from an off-the-shelf laptop as well as from a desktop with a currently in-market touch monitor running Windows 7 software. It was live and that was terrifying for all of us. Notably, the code was barely working—clicking or tapping in the wrong place could have been a disaster. Still, it was a smooth demo.

    Walt and Kara were constantly reaching over Julie’s shoulders and touching the screen to see what would happen. We had agreed to the scope of the demo and that we would not venture off and show or talk about other features.

    Julie drew using a touch version of the venerable MS Paint and whisked through photo management, including “features anyone with an iPhone would be familiar with, such as two-finger zoom and slideshows.”

    At one point, Walt noticed that the taskbar (the Superbar we showed off at our HQ meeting previously) looked a bit different and asked about it. Julie replied, “You know we’re not supposed to talk about that today.”

    The mapping application from Surface Table was also shown but on Windows 7, including the live data for the Carlsbad, California, hotel we were in. The demo wrapped up with the playing of a multitouch piano application, which by coincidence was like one making the rounds on jailbroken iPhones. There was no App Store yet, but the technically savvy crowd had figured out how to use the released developer tools to build apps and sneak them onto an iPhone.

    Our demo was a success. Phew. Windows 7 was out there, at least in words, pictures, and videos.

    The next step was getting pre-release code into the hands of developers.

    On to 095. Welcome to Windows 7, Everyone



  • The Windows team was plugging away on Windows 7. The outside world was still mired in the Vista doldrums. Then in the summer of 2007 there was a wakeup call in the announcement and shipment of a new type of computer from upstart Asus, called a Netbook, a tiny laptop running Linux and a new chip from Intel. Would that combination prove to be a competitive threat or a huge opportunity for a PC world fresh off the launch of the iPhone?

    Back to 092. Platform Disruption…While Building Windows 7

    When a project like Longhorn drags on, the business is going to miss important trends. The biggest trend in computing in 2005-06 was expanding the PC to the rest of the world, something Microsoft and others called “the next billion” as the existing computing model reached approximately one billion of the world’s 6.5 billion people.

    To outfit the next billion, many believed a new type of computer was needed. They were right. Many places where we would have liked to bring computers to the next billion lacked reliable electricity, air conditioning or heating, and constant high-speed internet connectivity, and often had dusty environments, as in Africa and much of Asia, where I happened to have some experience.

    At the MIT Media Lab, Nicholas Negroponte, the lab’s founder, spearheaded a project called One Laptop Per Child, OLPC, that launched at the Davos forum in 2005. The rest of the world would come to know this as the “$100 laptop” at a time when most laptops cost about $1000 or more.

    The price of $100 seemed absurd given that was less than an Intel processor and only marginally more than Windows, and that was before the rest of the hardware. Therefore, the initial designs of the OLPC would ship without commercial software from Microsoft or hardware from Intel. Instead, partners from anyone but Wintel lined up to help figure out how to build the OLPC. Almost immediately, Microsoft and Intel were blocked out of the next billion.

The resulting device, a product of some extremely fancy, perhaps fanciful, industrial design, was key to drawing attention to what became known as the OLPC XO-1. The device included several technical innovations aimed at solving computing needs for children in remote areas while also addressing the goal of ultra-low cost. The software was open source, with a great deal of influence from historic MIT projects aimed at learning computing and programming. The effort even created a non-profit where people could go to a web site and donate the price of a device to have one distributed. The OLPC XO-1 was so cool looking that many people wanted one for their own use, right here in the US. The rollout and communications were exactly what you'd expect from the Media Lab—exciting and broadly picked up.

Microsoft, through the Microsoft Research team and Craig Mundie (CraigMu), who led advanced technology, spent a great deal of energy attempting to insert Windows into the project. The company made progress, but at the expense of causing some well-known members of the OLPC project to resign once it became clear that proprietary software was involved. Microsoft, for its part, would embark on creating an ultra-low-cost version of Windows, stripped of many features, called Windows Starter Edition.

If there's a theme in this work, from both Microsoft and MIT, it is the core idea that to bring computing to the next billion, products would need to cost less out of necessity and therefore would need to do less and be less powerful. Often in the process the products would also be transformed to make them easier to use, because apparently that was required too. This is a fundamental mistake made time and again when addressing what the financial and economic world calls emerging markets. Individuals in emerging markets do not want cheap, under-powered, or otherwise worse products. They certainly do not need products that are dumbed down. In technology, there is really no reason for that, as products do get less expensive over time. Yet in the immediate term companies get stuck with their existing price structures and economics. And people in emerging markets are not less smart; they simply do not have access to the money for expensive products. I saw this dozens of times visiting internet cafes in the most rural and economically disadvantaged parts of the world, where students had no problems at all using a PC connected to the internet.

Microsoft would really get stuck in China, where the limiting factor wasn't hardware. People were buying huge numbers of PC components and simply assembling their own desktops that were on par with anything available from Dell or HP. They weren't running Linux or OpenOffice; they were just pirating Windows and Office. The Windows team (I was still in Office) created a whole group to strip down Windows XP and add a shell to make it "easier" for emerging markets, again a dumbed-down product. These changes to the software were as much a way to make the product favorable to the new markets as they were to make it unfavorable to existing customers. Eventually, Microsoft came up with a plan to offer Windows plus Office to governments in emerging markets, including China, for a very low price, so long as the computers were purchased by the government. At $3 per license this sounds like an incredible deal, but in all honesty it was not that different from prices in many developed markets.

    Still, the idea that the next billion required much lower priced computers, and somehow the rest of the world did not, would not go away. The need to serve this market drove the next wave of innovation from Microsoft and Intel, much more so than serving existing markets.

As Intel was mostly left out of the OLPC project, at the Intel Developer Forum in fall 2007 it announced a new line of microprocessors with at least some emphasis on making lower-cost computers for an expanded market. Intel demonstrated the processor in what is called a reference design, a PC made by Intel as a way to influence its customers to build similar PCs. The Classmate PC was a pretty cool-looking laptop somewhat influenced by the Apple iBook, which had brought the rounded edges, colors, and translucency of the iMac to a laptop form factor. Some would say the iBook itself owed its design lineage to the Apple eMate, based on the Newton and sold to education markets in the 1990s. As a reference design, the PC shown could support a variety of screen sizes, storage capacities, and more, so long as it ran a new Intel low-power chip.

Also showing off the new chip at the 2007 Intel Developer Forum was the Asus Eee PC—that's a lot of e's, which stood for "Easy to learn, Easy to work, Easy to play" according to the box. The Eee PC was the first Netbook. It was also one of the smallest computers on the market. While there had been previous attempts at super small computers, such as the Sony PictureBook and Toshiba Libretto, those were years earlier and premium priced. This form factor and price were unique at the time. The first Eee PC 701 had the most minimal hardware specifications of any PC around and ran a customized Linux and OpenOffice. It also had several games and entertainment apps.

The laptop was physically tiny, at 8.9 x 6.5 x 1.4 inches and under 2 pounds, as was its keyboard. It had a 7-inch display running at 800x480 pixels. For storage the Eee PC had only 4GB of solid-state disk (like an iPod), along with just 512MB of RAM. The price was $399 when it finally made it to the US, though initial reports were that it would cost $189-$299. It is worth noting that the same Intel chip was also available at retail stores in a laptop with a 14" screen, an 80GB drive, a DVD drive, and 2GB of RAM for about the same price. This reality would not detract from the cool factor or the fact that it fit in any messenger-style bag.

    The software load was kind of a mess, at least I thought so. The hardware, however, caught the attention of tech enthusiasts who were quick to turn the Eee PC into a tuner platform, looking to modify and replace components. Soon, modders were replacing the storage or adding more memory. There were web sites popping up devoted to modding the Eee PC.

The device sold quite well through the 2007 holiday season and that got our attention. So did the fact that modders were doing their own work to strip down Windows XP (from 2001) and squeeze it onto a 4GB system. One such modder was Asus itself, which came to us wanting to officially modify Windows XP. There were three problems. First, they wanted Windows XP to cost the same as their Linux, which was $0. Second, they wanted to remove a bunch of XP just to make room on disk. What they wanted to remove was simply anything that took up space. It was kind of a free-for-all that reminded me of what enterprise customers did to Windows 3.x and Office 4.x back in the 1990s to squeeze onto 20MB hard drives.

The third problem was significant. Windows XP was done. We were over it. It was already seven years old and we had released Vista months earlier. Vista was our bet. First, there was definitely no way Vista could be squeezed down. Second, Vista had just been through the Vista Capable mess, where the Basic edition of Vista became tainted in the market, and these chipsets were good enough only for Vista Basic.

If we began selling XP again for Asus, then we would have to offer it for every Netbook. And if we offered it for Netbooks, what would stop OEMs, and ultimately customers, from demanding it for every PC? Suddenly it was looking like a rollback of Vista, especially if we participated. We didn't really have a choice. Either we would lose the sockets to Linux, or the modders would continue to pirate XP and hundreds of thousands of computers would run a Frankenstein build of XP that already had tons of security problems, or we could suck it up and let the OEMs sell XP.

It turns out Intel was in a bit of the same situation. They were starting to worry that OEMs would want to make many more laptops out of the low-power chips, which would take away sales of their more powerful and profitable laptop chips. Intel defined the Netbook category, and thus the availability and pricing of low-power chips, to require certain maximum screen sizes and configurations. This constrained what would technically be defined as a Netbook. Microsoft used these same definitions, chief among them the tiny screen size, to offer Windows XP. This had the side effect of extending the Windows XP lifecycle even more. When we finally celebrated the end of Windows XP, it came years later than originally planned, entirely due to Netbooks, though the industry would remember this as a story of the failure of Vista in general.

In the spring of 2008, after what could be dubbed the Asus Eee PC holiday season, Intel announced the name of the chipset to power Netbooks: ATOM®. With that, both Intel and Microsoft were all in on Netbooks, and so were all the OEMs. The collective positioning of Netbooks was as a companion to a primary computer, though that was just marketing. Intel called them MIDs, for mobile internet devices, a third category that was neither a mobile phone nor a laptop but a highly portable companion computer. A lot of customers genuinely bought Netbooks as their new laptop.

    The industry was filled with concerns over margins from these devices. Intel chipsets that cost around $100 for a laptop were half that for a Netbook. At under $400 there was little room for either margin or innovation. The New York Times wrote of these concerns in 2008, “[D]espite their wariness of these slim machines, Dell and Acer, two of the biggest PC manufacturers, are not about to let the upstarts have this market to themselves." No OEM was going to be left out of what could potentially be a big shift.

Maintaining low prices, especially around $399, posed some problems, mostly the need to cut back on other components and capacities. Intel dropped many of the required aspects of PCs, such as extensible memory, large disk drives, and DVD drives, in an effort to develop the platform that ASUS and others would follow. The PCs would be like a phone—bare bones with all the components essentially built in and little official extensibility. They would have 10-inch or smaller screens, presumably because of the category definition but in practice because small screens were super cheap and, importantly, Intel did not want to see PCs that might compete with profitable, mainstream 13-inch laptops.

    The typical specifications of a Netbook were an ATOM processor, 1 or 2GB memory, Wi-Fi, USB ports, SD card reader, a web camera, and 2GB to 4GB of solid-state storage, as opposed to a spinning disk drive. The screen would be an inexpensive LCD running at 1024x600 resolution, with graphics bumping up against the low-end Vista Capable designation. Windows laptops had not yet incorporated solid state storage at all, which made Netbooks rather novel for techies.

    For those keeping track, every single one of those specifications was less than the minimum system specifications for the lowest-end Vista PC. Every. Single. One.

That was the rub. These were low-end PCs in every sense. Some of the specs were borderline awful; most notably, the 1024x600 screen resolution was at the limit of what Windows XP would correctly display. Many interactions with Windows were tricky with so few pixels, and much of the internet and many Windows applications would really struggle. By struggle, I mean it was not uncommon to get into a situation where the button that needed to be clicked was off screen and simply unavailable, having run out of display area. At times the display just didn't show enough rows, or the text was too small to read or edit.

HP, Lenovo, Asus, and others released a flurry of devices, all with nearly identical specifications. Even though the devices were identical down to the processor, chipset, and even power adapter, they differed in keyboards, displays, and the quality of storage used. These small differences were a full-employment act for the tech enthusiast web sites that tracked every development in this hot new category.

Personally, I was really into my Netbook. I was already wedded to the 11" form factor, having switched to using only super portable PCs anyway. The Netbook was tiny, but the low-end nature of the hardware became an ever-present reminder of the need to make Windows 7 work on much less hardware than Vista required. I ran on a Netbook full time for most of the Windows 7 development cycle. The photo in the previous section of me holding up a Lenovo IdeaPad S10 with an "I'm a PC" sticker was my very own Lenovo, which I even modded myself with 4GB of RAM and a faster solid-state drive. I managed to blog a few hundred thousand words on the tiny keyboard as well. Where it really came in handy was in constantly using benchmarks and slow operations to annoy the engineering managers. I loved it, willing subject that I was.

Behind the scenes, though, something else was going on. The reality was that the root of the Netbook was not just OLPC and the next billion PC users; rather, it was the inability of Wintel, both Intel and Microsoft, to transition to a mobile world. The iPhone, which released just before the Eee PC became available, was built with an ARM chip, technically a system-on-a-chip, or SoC. ARM chips powered the new generation of devices from portable music players to mobile phones to the new iPhone. A SoC packages much of the whole board of a Netbook into a single small, energy-efficient chip that includes at least the microprocessor and graphics. The Intel ATOM line was not quite a SoC, though over time it would evolve to be one, at least in name. What made it possible to run Linux and Windows was the combination of compatibility with Intel instructions and support for PC-style peripherals such as graphics and storage. Microsoft's version of Windows for ARM SoCs was Windows Mobile, built on an aged Windows code base.

Intel's entry, or lack thereof, into mobile computing and building system-on-a-chip components to power mobile phones contributed to the origin story of ATOM. Intel had invested substantially in an attempt to bring the Intel x86 architecture (or IA, as they called it) to mobile phones. Famously, though, Intel ended up not collaborating with Apple on the iPhone, as CEO Paul Otellini felt Apple's required price was too low and, by some accounts, its desire for control too high. While the initial foray for ATOM was aimed at OLPC, many would claim that Netbooks were simply an effort to make something out of the failed investment in mobile. Since the chips were fully compatible at the instruction-set level with Intel's other chipsets, one idea was to use them to build a new laptop—essentially to rescue the failed entry into mobile chipsets. Since these chips drew less battery power, were smaller, and cost less, Intel suggested OEMs create small and inexpensive PCs with them.

    In essence, the Netbook arose out of a desire to make use of a low-end chip that originally was meant to compete with ARM and to be used for mobile. While it was compatible with Windows, it was by most accounts inadequate for modern Windows, especially when it came to graphics. The small screen size, while convenient as a demarcation for the Intel chip product line and Windows XP license, was also a necessity because the graphics chipset could not drive a much larger screen.

Nevertheless, Netbooks were flying off the shelves. They were the talk of the industry. Over the next six to 12 months, sales of these low-end PCs skyrocketed to 40 million units a year—more than 15 percent of all PCs, which was exactly what OEMs loved, especially ASUS, which had made a big bet. Unfortunately for all involved, profits were slim across the ecosystem, and worse, the exuberance was truly cannibalizing laptop sales, though not for ASUS, which had the new, hot product and a much smaller laptop business. When I visited Southeast Asia or Africa, for example, internet cafés had a half dozen Netbooks where a single PC once would have been—quantity of endpoints over quality of experience.

Over the course of 2008 and into mid-2009, Netbooks remained great sellers. Even if a Netbook was sold with Linux, the demand for Windows was such that pirated Windows XP, hacked to fit on the available storage, became the standard. That seemed to benefit Microsoft in the long run, I suppose. These PCs were seemingly falling from the sky in massive numbers, and while the business side was worried about the pricing of Windows and piracy, the product side was happy with something new, finally. There were review websites entirely devoted to the Netbook craze, chronicling the latest developments. In a sense, what was not to love about low-priced computers if people were reasonably happy?

There was little Microsoft could (or should?) do to thwart the momentum, especially because we did not want to lose to Linux. We would have loved nothing more than for the likes of HP or Dell to work on Sony VAIO-style PCs that competed with Apple, but that was not to be for a few more years. The PC ecosystem was once again proving that a race to the bottom on pricing and experience was what drove it.

Over time Netbooks expanded in system specifications, with OEMs constantly bumping up against the constraints both Intel and Microsoft put in place on what counted as a legit Netbook. Some added slightly larger screens, making them even slower because of the extra stress on the under-powered graphics chips. Some added full-size spinning disk drives. Some could be upgraded to 4GB of memory. The problem was that under the hood these were still ATOM chips. They were not good PCs. Were they convenient, lightweight, and portable? Yes, but they were slow and had poor graphics. YouTube videos skipped frames and were jittery. Web sites that used Adobe Flash were mostly unusable. Games were too slow. Using Office was marginal at best. Even battery life was limited to three or four hours.

    Netbooks, however, played an indisputably positive role in developing Windows 7. They institutionalized a low-end specification that was in-market and broadly deployed. We had to make Windows 7 work reasonably well on them.

Peak Netbook could perhaps best be described by the questions about when Apple would make a Netbook, something Intel would have loved. Such an introduction would have legitimized the category. The mainstream business press just assumed Apple would enter because it needed growth and was missing out on the hottest new category, selling tens of millions of units.

Steve Jobs's answer to Netbooks, in a uniquely Apple way, was the MacBook Air, announced in January 2008 just after the Eee PC holiday season. Apple's answer to a $400 Netbook was a $1,799 premium Mac. Classic Apple. The "world's thinnest notebook," said Steve Jobs, and Walt Mossberg said it was "impossible to convey in words just how pleasing and surprising this computer feels in the hand."

It simply wasn't in Apple's thinking to release a sub-optimal product. Even the lowest-priced products from Apple are fully capable, with little compromise. Netbooks violated that core tenet of Apple, so there was no reason to expect anything other than for Apple to completely ignore the new category as the industry defined it.

In April 2009, Tim Cook even took to Apple's earnings call to trash-talk Netbooks:

    When I look at netbooks, I see cramped keyboards, terrible software, junky hardware, very small screens. It’s just not a good consumer experience and not something we would put the Mac brand on. It’s a segment we would not choose to play in. If we find a way to deliver an innovative product that really makes a contribution, we’ll do that. We have some interesting ideas.

    At least through 2009, the MacBook Air remained Apple’s answer to small, portable, and even low-priced computing.

    There were, however, enough criticisms of the initial MacBook Air to allow PC makers to look the other way or actively market against it by pointing out the lack of ports and extensibility. Instead, the race to the bottom with Netbooks continued. OEMs and Intel would stick with this approach for several years, while Apple refined its new approach to making laptops, eventually even bringing the price down on the MacBook Air. The result was rather crushing. PC elites quickly started running Windows on MacBook Air hardware simply because the hardware was so good, and Apple even provided instructions for how to do that. I can’t even count the number of meetings with OEMs, email threads around Microsoft, and queries from the press I received, asking, “When will there be Windows PCs like the MacBook Air?”

    The ecosystem stuck its collective head in the sand all while we rode Netbooks up and then, in a short time, straight back down. In hindsight, the Netbook runup hid the secular decline in PCs that had begun with the recession following the Global Financial Crisis. It would take years to recognize that, and a massive effort for the ecosystem, and Microsoft, to respond to the ever-improving MacBook Air.

Understanding and somehow addressing the relentless force driving the PC ecosystem to produce what most tech enthusiasts saw as lesser-quality devices (especially relative to Apple) remained one of our key challenges. Too many saw the rise of Netbooks as the answer to products from Apple while being unable or unwilling to respond directly. There was a fundamental difference between the volume platform appealing to the "masses" and the premium platform appealing to a smaller and more "well-off" segment of the market. That mental model held until the iPhone and MacBook Air. We had reached a tipping point.

With the iPhone, App Store, and MacBook Air, Apple was not simply on a roll but on the biggest roll of all time from 2006 through 2008.

    At the same time, we were deep into building Windows 7 and starting to feel good about the progress. It was time to get out and share our optimism.

    On to 094. First Public Windows 7 Demo



  • Welcome to Chapter XIII! In this chapter we build Windows 7 and bring it to market. We start with all the forces that were shaping up to "disrupt" Microsoft (in the now classic sense), including the launch of the iPhone, cloud computing, consumer internet services, and even the perception of bloat (in Windows this time). Each of these on its own would be significant, but they were happening all at once, while we were rehabilitating the team, hoping to ship on time for once. To add to the chaos of the moment, these forces appeared during the largest runup of PC sales, breaking 300 million units, followed by the biggest risk to PC sales growth, driven by the Global Financial Crisis. A lot was going on competitively, setting the context in which Windows 7 would be built and launched. I thought about competition a great deal, so there is a great deal about it in this section.

    Back to 091. Cleaning Up Longhorn and Vista

    The days of competing head-to-head with our own past releases or with vaguely similar products were over. Windows faced outright substitutes, and they seemed to be working.

    The Windows 7 team was progressing through engineering milestones M1, M2, and M3 with the energy and momentum increasing along the way, all while computing underwent radical changes at the hardware, platform, and user-experience layers. Everything appeared to be changing at once, just as we were getting our mojo back.

    Microsoft’s products and strategy were being disrupted. We just hadn’t, or perhaps couldn’t, come to grips with the reality.

These disruptive forces appeared over the course of developing Windows 7, each taking a toll on Microsoft's platform value proposition. Each contributed to a small but growing chorus of changing times, a chorus that started with the iPhone.

The iPhone was announced in January 2007, six months before Windows 7 Vision Day. The phone didn't yet have an app store or an app SDK, didn't run a full desktop browser, lacked push email for Exchange the way a Blackberry had it, and even omitted basic copy and paste. Nobody at the time thought this phone was relevant in a competitive sense to personal computers. Heck, it even required a personal computer for some tasks.

    Nobody except Steve Jobs.

We didn't know the extent to which the competitive dynamic would shift a year later, creating a true and unforeseen competitive situation, an existential threat, for all of Microsoft.

    I attended the iPhone launch event (rushing to it as it was the same week as CES that year) and walked away with a lot to think about for sure. It was easily one of the most spectacular launch events in the history of computing in my lifetime (after Windows 95 and the 1984 Mac launch and its toaster-size computer with a bitmap display that talked). Steve Jobs said one thing that proved to be incredibly important, with long-term implications overlooked by many [emphasis added, my transcription]:

    We have been very lucky to have brought a few revolutionary user interfaces to the market in our time. First was the mouse. The second was the click wheel. And now, we’re going to bring multitouch to the market. And each of these revolutionary user interfaces has made possible a revolutionary product—the Mac, the iPod, and now the iPhone. A revolutionary user interface.

    We’re going to build on top of that with software. Now, software on mobile phones is like baby software. It’s not so powerful, and today we are going to show you a software breakthrough. Software that’s at least five years ahead of what is on any other phone. Now, how do we do this? Well, we start with a strong foundation.

    iPhone runs OS X. [applause]

    Now, why, why would we want to run such a sophisticated operating system on a mobile device? Well, because it’s got everything we need. It’s got multitasking. It’s got the best networking. It already knows how to power manage. We’ve been doing this on mobile computers for years. It’s got awesome security. And the right apps.

    It’s got everything from Cocoa and graphics and it’s got core animation built in and it’s got audio and video that OS X is famous for. It’s got all the stuff we want. And it’s built right into iPhone. And that has let us create desktop-class applications and networking, right.

    Not the crippled stuff that you find on most phones.

    This is real, desktop-class applications.

Most reviews mentioned this, but it did not take up nearly as much airtime as the touch screen. In fact, the absence of support for Adobe Flash in the iPhone browser even seemed to undermine the point for most. The important fact was the technology underlying the iPhone—the use of the full operating system was a massively strategic, risky, and difficult choice. Using OS X allowed Apple to gradually bring over many Mac features across iterative development cycles, knowing that the code already worked. Apple could do this because it had bigger ideas for how it would break compatibility with the Mac and a bold new model for supporting developers building third-party software. From the very start, the iPhone was destined to be a complete PC, only rebuilt bit by bit with a modern architecture and API. Not only did the iPhone bet on ever-improving mobile bandwidth, as many criticized at the time, but it assumed mobile processors and storage would at least reach parity with the personal computer. In other words, from the very start the iPhone had a truck engine. (This reference will make sense in Chapter XIV.)

Windows had been taking the opposite approach, which was to base the mobile platform on a nearly decade-old version of Windows, stripped down even further, with thoughts, though not goals, of perhaps catching up to the current desktop Windows by adding new code over time. The incredible challenges this decision introduced would become readily apparent, but only with the release of a new Windows phone operating system to compete with the iPhone. The diverging paths of Windows for mobile and for laptops and desktops had been cast years earlier.

That summer, I lined up at the AT&T store at Pacific Place in downtown Seattle and picked up my black 4GB iPhone. Who needed 8GB on a phone? Some PCs were shipping with 4GB of storage and all of Windows XP. At the time of the launch announcement, I was quite skeptical of the touch keyboard and said so in an internal mail thread, pointing out that if touch screens were going to work, then Windows phones had already tried and mostly failed. I had been a hardcore Blackberry (and later Palm Treo) user since before general availability in the late 1990s. I was as much a CrackBerry addict as anyone. Some of the many Windows phones had a stylus like a Palm (or Apple Newton), but I never warmed to those and thought handwriting with a stylus was as dumb as Steve Jobs said at the launch. Within a few hours of having the phone (and a browser!), my worldview changed, especially about touch and especially about the evolution of operating systems. Even lacking copy and paste and relying on the slow AT&T 2G mobile network, you had to try really hard not to be impressed.

I remember emailing the co-founder of Blackberry and asking when there would be a full browser, and in a long back-and-forth thread he tried to convince me of the implications for battery life, the lack of capacity on the phone network, and even the lack of utility. While months earlier I might have been sympathetic, I was now staunchly on the other side of this debate.

    Inside the company, the iPhone went largely unnoticed outside of small pockets of people, and of course the phone team. Not because it was not a breakthrough product, but because it did not fully connect to our corporate mail service, Microsoft Exchange, as it was configured and permissioned by Microsoft’s IT group. Only those of us running on the testing servers, dogfood, were able to use an iPhone for email, and even then it had no support for our much-loved rights-managed email and Microsoft directory. There was a significant debate over whether maintaining this capability was good for self-hosting and competitive knowledge or bad for supporting competition. It was also about this time that support for using Blackberry was disabled. I put up a huge battle over this only to delay the inevitable. Making it difficult to fully host on competitive products was short-sighted but impossible to stop, even as a senior executive. This was done to “eat our own dogfood” even if the result meant truly innovative and competitive products would only receive cursory use.

While SteveB's comments around the iPhone launch have become something of a historically bad take, as he somewhat mocked the high price, it is crucially important to understand that he was as much reflecting as simply sharing the collective viewpoint of the entire PC industry, and most of the mobile industry. Collectively, nearly every party underestimated the ability of Apple, with no experience in the mobile industry, to deliver a hit product while also resetting the norms of the mobile carrier business model—the same carriers that Steve Jobs described as "orifices" during an interview two years before the world knew about the phone.

    The two fundamental assumptions of the PC industry that guided it to nearly 300 million units a year were low prices with consumer choice and a horizontal layering of hardware and software. This business model and technology architecture together enabled the PC. For all the struggles between OEMs, Intel, Microsoft and more, the success was indisputable, especially relative to the Apple model of premium price, limited choice, and a vertically integrated, single supplier.

Before the iPhone, the mobile phone seemed to be coming around to the PC model, with Windows phones appearing to make progress against Nokia and Blackberry as many PC OEMs used a Windows phone software license to offer low prices and choice to customers. Jobs would take this good news and make it appear bad by showing how Windows Phone was fragmented across phone makers, not a single OS. That drove the Windows Phone team crazy, particularly when Jobs presented the market share as a pie chart in 2008, basically rendering Windows Phone as "Other." The iPhone would trounce the traditional OEM model. Choosing between a bunch of crappy devices is no choice at all, even if they all run a single platform. Jobs had long been making this point about PCs too, except now it all seemed to be working.

He was only partially right because, in just a short time, Google would repeat the technical and partnership aspects of the Windows model exactly for phones, while upending the economic model by not charging for a software license, opting instead to further its advertising business model. Of note, the successful OEMs for Android turned out to be an entirely new set of partners, with none of the existing PC OEMs making the generational transition, perhaps in part because most had tried and failed with Windows mobile software. The combination of iPhone and Android would leave Microsoft in a hopeless middle, with a shrinking partner ecosystem across OEMs and developers. The iPhone was not a broad or instant hit, as is well documented. History does not easily recall the bumpy first year because the following years went so spectacularly well.

Seemingly unrelated, at least for the moment, there was that money-losing online retailer named Amazon.com, led by Jeff Bezos, plugging away on what Steve Jobs might have called a hobby or a side project. The company created Amazon Web Services, AWS, in 2002, though few noticed. The product was relaunched about a year before the iPhone with a new set of capabilities and two main new products, EC2 and S3, providing scalable on-demand computing and huge cloud-based storage. For developers, that marked the birth of the cloud era. Amazingly, perhaps, this new competitive front was not Google, which was where nearly all of our platform angst resided.

    For Microsoft, this was all starting to sink in. We had competition on the client and at the infrastructure layer. We were getting squeezed on both of our platform businesses, Windows desktops and Windows servers.

Ray Ozzie (ROzzie) was leading a new project, code-named Red Dog, with an incredible group of early Windows NT developers including DaveC. As chief software architect, ROzzie was looking to the cloud to reinvent the future of Windows Server, Microsoft's dominant and extremely profitable business of running enterprise computing in customer-owned and -operated data centers. Running important business functions in the cloud was still more than a decade away for Microsoft's enterprise customers, but Amazon's clarity of vision and the start-ups that gravitated to the approach were incredible. With the cloud came Linux as well. Microsoft already needed to catch up, but you had to look very carefully to believe that was the case, as it was not at all obvious from a customer perspective.

Microsoft had invested a decade in building out Windows Server, a hardware ecosystem, databases, and the entire .NET stack, along with a trained sales force. Suddenly AWS, running on Linux with entirely new models of compute, was the competitor. No salespeople required. To use AWS, developers just signed up online with a credit card and paid for whatever computing resources they needed as they needed them. No hardware to buy. No servers to set up, maintain, and secure. No running out of disk space or network bandwidth. No need to buy more servers in case there was a rush on the web site. A whole business could run from a laptop with a browser connecting to Amazon's data centers in the cloud.

It was nearly impossible for many born of the Windows server world, whether at Microsoft or among our customers, to fathom this approach. There were endless debates inside the halls and boardroom of Microsoft over whether this was crazy or fantasy. One of the most common debates centered on storing data: either how much bandwidth would be required to move data to or from cloud servers or how expensive cloud storage would forever be. To me these debates seemed rather like the debates over the browser taking over for Office—it was not a matter of if, but rather when everything aligned to make it possible and how different the resulting solutions would look from what we built. The very existence of Gmail seemed to demonstrate the present reality with the most mission-critical of all storage-intensive workloads, email.

We got so tangled up in how customers would migrate to the cloud that we did not really consider how much larger the market would be starting from nothing and growing cloud native. The irony of failing to predict this same type of massive upside was, in hindsight, almost too much, since that is exactly what gummed up the mainframe computing world and even the character-based (non-GUI) world of MS-DOS as the PC, Windows, and Office platforms took hold.

Additionally, when faced with disruption, while substantially new ideas are fast pulling the market and customers in new directions, the previous generation of innovation does not stop. That is what makes it even more difficult for the new approaches to take hold with incumbents. At every step, we thought we could keep adding features in the direction we were already going and keep winning. We were winning big. Microsoft's revenue growth in 2007 and 2008 was 15% and 18% respectively, with 2008 revenue growing past $60 billion.

One example of this competitive pull was our focus on the biggest competitor to Windows Server, VMware. Why? VMware used virtualization to manage Windows Server and in doing so commoditized Windows as one of many alternative operating systems VMware could manage. It had started off as a brilliant invention for developers to isolate code under development on the desktop, growing into a new component of enterprise infrastructure. We had been using it ourselves to simplify testing of Windows and even Office for several years already. If Windows Server could enable competitive virtualization, we could thwart the competition from VMware while also solving for the same scenarios Amazon seemed to be addressing with AWS, but without Linux.

VMware was acquired (in a complex series of transactions) by a Windows partner, EMC, which soon after acquired a separate company started by Paul Maritz (PaulMa), the former leader of Windows. Paul transitioned to lead VMware as CEO, where he implemented the same enterprise playbook that had helped make Windows Server a success. VMware was rapidly becoming the standard for a new wave of enterprise datacenter management, which would turn out to be a stop along the way to a future in the cloud. This mattered because it impacted the COSD team's contribution to Windows Server and put the team in between two different versions of the future, implemented by two different teams at Microsoft, each with serious competition—Red Dog building out one roadmap and Windows Server another. While one view might be that this was a prudent strategy, another was that it was a strategy guaranteed to slow our progress toward an inevitable future.

The remaining competitive challenge fell to Live Services. Competition was constant and coming from all directions. Many competitors came and went quickly, as was the norm in consumer, ad-supported services. Switching costs were low and whole populations changed quickly—it appeared to be a hit-driven business, which was not something Microsoft was geared up to navigate, especially after a failed decade of trying to make consumer and home software a thing with CD-ROM titles and later web sites. MSN Messenger and Hotmail had hundreds of millions of users, but daily active users (DAUs) were declining and engagement (usage time) overall was dropping. There was a good deal of advertising revenue, hundreds of millions of dollars, but it depended on intrusive display ads that riled users, even though the services were free.

Gmail rapidly became the new de facto leader in "free" email, offering essentially "unlimited" email storage measured in gigabytes while Hotmail was trying to build a business charging for extra megabytes. Announced on April Fools' Day 2004, which many assumed was a prank, and released in beta, Gmail had finally removed its invitation-only signup requirement by the time the Windows 7 vision meeting took place. Though it remained in beta, the service was exploding in use. While it would be a few years until Gmail surpassed Hotmail, Hotmail almost immediately stopped growing and started to see a decline in engagement. Gmail was not a gimmick; under the hood was an enormously innovative database and operational capability. And Gmail had no intrusive display ads. None.

MSN Messenger, eventually Live Messenger, had become enormously popular around the world, especially outside the United States, with hundreds of millions of active users. It too was facing an existential competitive threat, this time from Skype, a Swedish, Danish, and Estonian invention that offered free voice calls from almost any platform, notably both PC and Mac. While Messenger was often used to arbitrage SMS fees, Skype was arbitraging voice and creating a movement that would permit much of the world to skip landlines for overseas calls at a time when mobile minutes were incredibly expensive. Video calling was introduced as well, and while Messenger already had this capability, the cross-platform nature of Skype, as well as the focus on voice connections to local landlines, made for a much more compelling offering. Microsoft would finally acquire Skype in 2011 (eBay had acquired it in 2005 and later sold a majority stake to an investor group), when it had reached almost 300 million users worldwide, more than Messenger had achieved. In 2007-2009, Windows Live was still competing with Apple, Google, Yahoo, Skype, MySpace, and a host of category leaders across photos, blogging, and video. That was a lot.

    SteveB and Kevin Johnson spent a great deal of time and energy on the potential of acquiring Yahoo, the dominant leader with which MSN competed. Such a deal would have added to my challenges given their email and messenger services were suffering much the same way. We might have gained in search and content services, but we would have added productivity services that were also losing share just as Windows Live seemed to be.

Apple struggled to find its way through the cloud and services world, even with the launch of the iPhone. Apple's decidedly client- and device-focused approach was quite similar to how we saw Live Services and Windows evolving together. The services would be augmented by rich "desktop class" applications for photos, video, messaging, blogging, and even productivity. Apple had for years been selling a suite of creativity products, iLife, later adding a set of productivity tools called iWork, notably Keynote for slide shows. A collection of web services was originally called iTools, then later rebranded as the .Mac service (pronounced "dot Mac"), and included email, online storage, and backup. In the summer of 2008, the service was rebranded MobileMe in a very bumpy launch that was not widely praised. After eight years and a good deal of iteration, Apple continued to work to find its way even as the iPhone's success grew.

The most disruptive announcement from Apple came a year after the initial iPhone launch. In 2008, Apple announced and then brought to market a software development kit with APIs to build third-party apps for the iPhone. This also included an entirely new store for software distribution and a new economic model for developers, the App Store. It is almost impossible to overstate the leap the App Store brought to computing. The PC was drowning in viruses and malware because of the internet—the ability to download software in a few clicks seemed wonderful at first, but it quickly became a cesspool of awful software that at best simply degraded the PC experience. Additionally, the world of PC software had stagnated simply because it was so large that it became almost impossible for a new desktop product to gain awareness, distribution, and enough sales to support a pure-play software company. In fact, Skype might be the most innovative native, though cross-platform, application to break through outside of browsers, and we acquired it in 2011 before it was profitable. While some would view the App Store as a sort of closed ecosystem, it was literally the solution to the problems plaguing the PC. The Apple bet on OS X meant that there was a robust and proven platform and toolset to serve as a foundation, plus software distribution and economics. Developers hardly had all of OS X, but they definitely had a lot of it, and Apple could add more over time.

Microsoft was steeped in a competitive mindset across every generation of leader and from many perspectives. BillG never missed an opportunity to cite some positive attribute or significant asset of a competitor. SteveB brought his relentless sense of competition from childhood math camp and sports. MikeMap and JeffH instilled this in all of us, with an intense business rigor, back on my first team. In the Windows and Windows Live role, I faced competition from more directions and with more depth than I ever thought possible. The team, at first as I moved into the role, seemed to consistently minimize this new reality. It would be rude to say Microsoft was in denial. I don't think it would be unfair to say that after years of winning, and even feeling like we had beaten back Linux and open source, Microsoft had become much more focused on its own universe of customers and their problems, a universe mostly immune to influences from outside that gravitational sphere.

    In hindsight, that was what happened when two factors combined to create a step-function change in product trajectory.

First, the existence of a single massive product, Windows and the Intel PC ecosystem, Wintel, created constraints for those looking to build entirely innovative products. While many built products thinking of Wintel as an ingredient, amplifying the platform without posing a risk, a small percentage of risk-takers saw a different world. They saw the shortcomings of Wintel as an opportunity to reinvent. They built as though the leaders were not there and built what, in their eyes, the world should look like, whether they achieved critical mass success or not. Each of these new competitors had a worldview that revisited underlying assumptions about Windows, Windows Live, and the PC ecosystem. Competitors assumed any web browser as the user interface, connected over the internet to Linux servers running open source software—no Windows Server or .NET at all. Google, Facebook, and a constellation of start-ups in Silicon Valley embraced this model as though Microsoft never existed. Even when it came to Microsoft Office, most new companies in Silicon Valley operated as though it was an insignificant part of the software landscape. In 2008, while Windows 7 was in testing and we were trying to bring Internet Explorer back from hibernation, Google released the Chrome browser and with it put an end to even that sliver of Microsoft being part of the next wave of innovation.

Second, the incumbent leader had to mess up. Customers generally didn't spontaneously change, even if there was something better to switch to, because of processes, habits, and costs, and they didn't change all at once. Leaders messed up by ignoring new technologies, especially as, over time, little technologies added up to something material. Another risk was a failure to execute and deliver new products to market, simply dropping the ball. Microsoft, mired in the journey through Windows Longhorn and executing a Windows strategy put forth in the mid-1990s, had indeed dropped the ball. It was increasingly difficult to appreciate or even see changes to the technology landscape when the company's decision-making context was so dominated by goals, challenges, and issues entirely of its own making.

Waiting to pick up the ball was the competition born of the internet and web. While many wished to connect this potential disruption of Microsoft to the antitrust events and resulting settlement of the early 2000s, none of Apple's iPhone, Amazon's AWS, or Google's Search or Gmail had anything to do with the trial and settlement. While some like to claim Microsoft was distracted, they would be wrong. If Microsoft was distracted, it was by simply trying to finish Windows Vista, or compete with VMware or IBM or even Linux, or execute on our own plans and growth while dealing with issues like software security and quality. This wasn't about the browser, the price of Windows, or even what software was installed on a PC. Those had already become old battles, irrelevant to the rapid structural changes happening in software (the kind that produced Microsoft, then Windows, then Office in the first place). As fast as one company can rise to success, another can do the same in equally unexpected or counterintuitive ways. This was the argument, or perhaps the defense, Microsoft and BillG had put forth time and time again. No matter how many times BillG said this to the press, customers, or regulators, none would believe him…even as it was taking place.

This was all happening as I was trying to get our house in order and make progress. Things just weren't as simple as they were in 2005, when OEMs were simply waiting on a new release of Windows. The PC makers were looking particularly unhealthy and were deeply concerned about the rise of mobile phones, on top of normal concerns about the price of components, delays in Windows, and the chip roadmap from Intel—would phones be an opportunity for PC OEMs or would they prove to be a generational change in hardware leaders as well? Should they be considering Linux on the desktop as a replacement for Windows, given its popularity on servers?

    SteveB, and increasingly the Board, had a lot of questions and concerns surrounding the competitive dynamic. Some board members were close to the hardware ecosystem and would oscillate between certainty that the ecosystem would deliver and that it would not. International board members were using Skype to talk to their families. Everyone wanted to know how to connect from their iPhones to their Microsoft corporate email or Windows Live personal mail, or why iTunes was so slow on Vista, or why Mac Excel was so different from Windows Excel as new Mac owners were discovering.

    In fact, they were all asking how long it would be before they did everything on mobile phones. There was also deep concern over browsers, knowing it had been five years since the last release of Internet Explorer in 2001 and work had all but stopped. All of them wanted a PC as cool as a MacBook. Microsoft board members had a budget for PC purchases and always wanted to know the most Apple-competitive Windows laptop to buy, and for much of the duration of Windows 7 we had no answers. Each time I attended a Board meeting, I had to respond to all of these questions again.

Like Captain Kirk in ST:WOK (Star Trek II: The Wrath of Khan), I would look around and think, "Please, please…give us time…the bridge is smashed, computers inoperative." We were rebuilding the team and rebuilding trust with customers and OEMs. Windows 7 was going to pave the way for us to do big new things. There was little more we could do than get that done.

Whether it was desperation, a lack of alternatives, or simply misplaced confidence in the team, the questions kept coming, yet there were few questions about Windows 7 itself. Many were already looking beyond Windows 7, thinking and plotting as though delivering Windows 7 was some sort of no-brainer.

    Looking back, it was equally easy to ponder the radical idea of basically skipping Windows 7 and going straight for something to compete on these many fronts. We could theoretically catch up to these multiple competitive forces and not miss an entire cycle of innovation if everything was aimed at mobile and cloud. One big mega-strategy to build a new Windows, a new phone, and integrated cloud services.

That would have been absurd. It was the opposite of possible. The team was still recovering from the Vista release and its portfolio of stretch goals that, to put it kindly, did not go as planned. The last thing we should have come up with was some sort of Longhorn redo. Frankly, what we planned for Windows 7 was kind of crazy given the recent track record: we planned Windows 7 and all its features on the assumption that the team could deliver a major update to Windows, on time. Additionally, the Server team and its customers remained not only unconvinced of the cloud but actively campaigned against it. Besides, we had Red Dog. Windows 7 had to serve both of those.

    The impact and the constraints of the past had long-lasting effects.

While the Board was anxious about the post-Vista landscape, the technical trade press, mostly made up of Microsoft watchers, remained tuned to Windows. It was the product they knew best, and the Vista release was causing difficulties. But the double-edged sword of the beat reporters was that they covered what we were up to, not so much what we should have been up to, until it was too late. It would still be a few years before most reporters made the switch to Mac, but the switch to iPhone was happening quickly, bringing renewed attention to Apple laptops.

    At the same time, any little thing we did was chum in the water for the dozens of beat reporters covering Microsoft.

In the summer of 2007, word leaked out about the impending Windows Vista service pack described previously. It was not a horrible leak, but it scrambled our OEM partners and immediately froze the few Vista enterprise deployments in process. The field sales team was livid that they had not been briefed on the release, again arguing for more transparency from Redmond. The problem was that we had just gone through five years of transparency and every constituent was annoyed, at best. I wrote a 3,000-word internal blog post, "Transparency and disclosure," where I tried to put forth the idea that being fully transparent isn't compatible with being a good partner, and that we needed to aspire to translucency (excerpt follows).

    Transparent. Easily seen through or detected; obvious.

Translucent. Easily understandable; lucid.

    One topic I have been having an interesting time following has been the blogs and reports that speculate about how Windows will go from being an open or transparent product development team to being one that is “silent” or “locked down”. Much of this commentary seems to center around me personally, which is fine, but talks about how there is a Sinofsky-moratorium on disclosure. I think that means I owe it to the team to provide a view on what I do mean personally (and what I personally mean to do), of course I do so knowing that just by writing this down I run this risk of this leaking and then we’ll have a round of phone calls and PR management to do just with regards to “Sinofsky’s internal memo on disclosure”. But I thought it would be worth a try.

    Customers and partners want to know about SP1 for Vista. Actually they need to know. We want to tell them. But we want to do so when our plans and execution allow that communication to be relatively definitive. We are not there yet. So telling folks and then changing the plans causes many more challenges than readily apparent. While it might sound good on paper to be “transparent” and to give a wide open date range and a wide open list of release contents, we all know that these conversations with customers don’t end with the “we’ll ship by date and we’ve prioritized ”. Folks do want to know “did you fix this bug?” That is reasonable, but we don’t have all those answers and thus we cannot have a reasonably consistent and reliable communication…yet. We are working towards that. While there is clearly a challenge in the near term in not offering details, this challenge is much less than if we get the wrong information out there and we have to reset and unset expectations. Even among our enterprise customers, for whom this type of information is routine, we have a long history of really scrambling these most valuable customers with “information” that turned out to be “misinformation”. The difference we are trying to highlight is the difference between transparency in what we’re “thinking” and transparency in what we’re “doing”. Everyone wants to know what we’re thinking, but making it clear that those are thoughts versus “doing” is a subtlety lost on a mass audience.

    I know many folks think that this type of corporate “clamp down” on disclosure is “old school” and that in the age of corporate transparency we should be open all the time. Corporations are not really transparent. Corporations are translucent. All organizations have things that are visible and things that are not. Saying we want to be transparent overstates what we should or can do practically—we will share our plans in a thoughtful and constructive manner.

    This too leaked. It became known as the “Sinofsky omerta” (awful) and the idea of being translucent was always said snidely. We had gone nine-plus months without any substantial forward-looking discussion of Windows. The reporters covering Microsoft were restless. The leaked blog post only served to amplify any other leaks.

    A college student and blogger in Australia, Long Zheng, who had become something of a canary in tracking the evolution of Windows, happened to catch an October 2007 college recruiting talk at the University of Illinois given by Eric Traut (EricTr), a Distinguished Engineer, one of the senior architects in COSD, and a core member of the Windows NT architecture team. Eric was a key inventor of the hardware emulation technology used by Apple as it transitioned microprocessors the first time in the 1990s, and then an early pioneer of virtualization at Connectix, a competitor to VMware that Microsoft acquired in 2003.

    EricTr presented a wonderfully detailed talk on the role of virtualization in modern computing, describing the work he did along with a number of the most senior people on Windows. He described the architecture of a modern scalable OS and the evolution of Windows over time. He even showed some code.

    To that blogger, the talk was a demo of Windows 7, or at least of exactly what Windows 7 should be.

    Zheng saw the presentation on a video posted on Microsoft’s own Channel9 website and promptly wrote a blog post about the future of Windows 7, referring to the talk as a “demonstration of Windows 7.”

    It wasn’t that at all. What was so exciting, though?

    Over the years there was a constant murmur underlying the evolution of Windows about bloat—expansion of code size, decreasing performance, the requirement that PCs have more and more memory just to keep up. Everyone’s PC got slower over time because of bloat. EricTr’s demonstration inadvertently played into this narrative and provided a huge hope for improvements in the future.

    To implement virtualization, the OS kernel was being worked on to further reduce the impact it would have on memory and CPU consumption. Eric demonstrated this new “minimal kernel,” which he dubbed MinWin. This would be exactly the solution to every problem with PCs—if Windows was good but a bit bloated, then certainly a minimum Windows, or MinWin, would be what everyone would want. It would do everything Windows did but with minimal code. Who would not want that? If one set out to create an ideal branding to describe a release of Windows for every tech enthusiast, MinWin was it. As I read the blogger’s account, I thought “Oh my gosh, this is Office Lite, but for Windows”—it was a hypothetical product that did everything it needed to do, but was easier, smaller, faster, and lighter. What’s not to love? (See 076. Betting Big to Fend Off Commoditization)

    Eric’s talk was about virtualization and scaling Windows Server. It was not at all about Windows the way most people thought about it. Eric described what he was showing as follows:

    We created what we call MinWin. Now, this is internal only, you won’t see us productizing this, but you could imagine this being used as the basis for products in the future. This is the Windows 7 source code base, and it’s about 25 megs on disk. You compare that to the four gigs on disk that the full Windows Vista takes up.

    Everyone on the team, most certainly a COSD architect, knew the blogger’s description of this as Windows 7 was incorrect—Eric even said so. Even assuming Windows was bloated, the OS kernel wasn’t the culprit. We ended up spending a lot of time with the press figuring out how to reduce the expectations and to not frame the next Windows as some sort of “new kernel.” This backfired because they heard us downplaying expectations for the release overall, which, given our silence, made sense in light of the Windows history of overpromising and underdelivering. As word spread, we began to hear from enterprise customers who thought that a new kernel would introduce compatibility concerns, especially on the server. This was neither a practical nor a theoretical issue.

    The challenge this created was indirectly related to bloat. Rather, it created a perception that Windows 7 was going to be substantially “less bloated” (whatever that meant) than Windows Vista. That prompted people to talk about comparisons to the now, post-Vista, much-loved Windows XP. Nothing in the product plan had changed, but there was suddenly a perception Windows 7 would be dramatically improved.

    The public expectations for the release went up, as if they weren’t already sky high. When the outer reaches of the company saw these stories, particularly where there were direct connections to customers, the optimism was contagious.

    Scary? Sure. Except we already had thousands of people working on that very opportunity—making a leaner, more efficient Windows while also making it do a lot of new stuff.

    The absence of information gave people ample room to create their own worldview. Many, especially SteveB and the OEM team, wanted to use this, or one of the other rumor cycles, to get out there with more information. The team was still in the early stages of executing, and we needed to make more progress. I did not want to be out there just yet. Besides, all I could have done at that moment was counter the now-exaggerated expectations, which would create more confusion and likely cause another news cycle condemning Windows 7 as a limited or incremental update to Vista.

    Just as the Fall computer selling season was gearing up, a new type of computer was all the rage in Asia and about to hit the US market. Maybe these “netbook computers” would breathe some life into Vista and buy us some time before we started talking about Windows 7?

    On to 093. Netbook Mania



  • Whenever you take on a new role you hope that you can just move forward and start work on what comes next without looking back. No job transition is really like that. In my case, even though I had spent six months “transitioning” while Windows Vista went from beta to release, and then even went to Brazil to launch Windows Vista, my brain was firmly in Windows 7. I wanted to spend little, really no, time on Windows Vista. That wasn’t entirely possible because parts of our team would be producing security and bug fixes at a high rate and continuing to work with OEMs on getting Vista to market. Then, as was inevitable, I was forced to confront the ghosts of Windows Vista and even Longhorn. In particular, there was a key aspect of Windows Vista that was heavily marketed but had no product plan and there was a tail of Longhorn technologies that needed to be brought to resolution.

    Back to 090. I’m a Mac

    Early in my tenure, I received an escalation (!) to “fund” Windows Ultimate Extras. I had never before funded anything via a request to fund, so this itself was new, and as for the escalation. . . I had only a vague idea what Ultimate Extras were, even though I had recently returned from the Windows launch event in Brazil where I was tasked with emphasizing them as part of the rollout. The request was deemed urgent by marketing, so I met with the team, even though in my head Vista was in the rearview mirror and I had transitioned to making sure servicing the release was on track, not finishing the product.

    The Windows Vista Ultimate SKU was the highest priced version of Windows, aimed primarily at Windows enthusiasts and hobbyists because it had all the features of Vista, those for consumers, business, and enterprise. The idea of Ultimate Extras was to “deliver additional features for Vista via downloadable updates over time.” At launch, these were explained to customers as “cutting-edge features” and “innovative services.” The tech enthusiasts who opted for Ultimate, for a bunch of features that they probably wouldn’t need as individuals, would be rewarded with these extra features over time. The idea was like the Windows 95 Plus! product, but that was an add-on product available at retail with Windows 95.

    There was a problem, though, as I would learn. There was no product plan and no development team. The Extras didn’t exist. There was an Ultimate Extras PUM, but the efforts were to be funded by using cash or somehow finding or conjuring code. This team had gotten ahead of itself. No one seemed to be aware of this and the Extras PUM didn’t seem to think this was an issue.

    As the new person, this problem terrified me. We shipped and charged for the product. To my eyes the promise, or obligation if one prefers, seemed unbounded. These were in theory our favorite customers.

    The team presented what amounted to a brainstorm of what they could potentially do. There were ideas for games, utilities, and so on. None of them sounded bad, but none of them sounded particularly Ultimate, and worse: None existed.

    We had our first crisis. Even though this was a Vista problem, once the product released everything became my problem.

    The challenge with simply finding code from somewhere, such as a vendor, licensing from a third party, or something lying around Microsoft (like from Microsoft Research), was that the journey from that code to something shippable to tens of millions of customers, running on a million different PC configurations, in 100 languages around the world, and also accessible, safe, and secure, was months of work. The more content rich the product, in graphics or words, the longer and more difficult that process would be. I don’t know how many times this lesson needed to be learned at Microsoft, but suffice it to say small teams trying to make a big difference learned it almost constantly.

    And then there was the issue of doing it well. Not much of what was brainstormed at the earliest stages of this process was overly compelling.

    With nothing particularly ultimate in the wings, we were poised for failure. It was a disaster.

    We set out to minimize the damage to the Windows reputation and preserve the software experience on PCs. Over the following months we worked to define what would meet a reasonable bar for completing the obligation. Unfortunately, I mean that legally, as that was clearly the best we could do. It was painful, but the prospect of spinning up new product development meant there was no chance of delivering for at least another year. The press and the big Windows fans were unrelenting in reminding me of the Extras at every encounter. If Twitter had been a thing back then, every tweet of mine would have had a reply “and…now do Ultimate Extras.”

    Ultimately (ha), we delivered some games, video desktops, sound schemes, and, importantly, the enterprise features of BitLocker and language packs (the ability to run Vista in more than one language, a typical business feature). It was very messy. It became a symbol of the lack of a plan as well as the myth of finding and shipping code opportunistically.

    Vista continued to require more management effort on my part.

    In the spring of 2007, shortly after availability, a lawsuit was filed. The complaint involved the little stickers that read Windows Vista Capable placed on Windows XP computers that manufacturers were certifying (with Microsoft’s support) for upgrade to Windows Vista when it became available. This was meant to mitigate, to some degree, the fact that Vista missed the back-to-school and holiday selling seasons by assuring customers their new PC would run the much-publicized Vista upgrade. The sticker on the PC only indicated it could run Windows Vista, not whether the PC also had the advanced graphics capabilities to support the new Vista user experience, Aero Glass, which required at least Windows Vista Home Premium. It also raised the issue of whether supporting those features was a requirement or simply better for a customer who had what was then a premium PC. The question was whether this was confusing or too complex for customers to understand relative to buying a new PC that supported all the features of Vista.

    A slew of email exhibits released in 2007 and 2008 showed the chaos and tension over the issue, especially between engineering, marketing, sales, lawyers, and the OEMs. One could imagine how each party had a different view of the meaning of the words and system requirements. I sent an email diligently describing the confusion, which became an exhibit in the case along with emails from most every exec and even former President and board member Jon Shirley (JonS) detailing their personal confusion.

    The Vista Capable challenge was rooted in the type of ecosystem work we needed to get right. Intel had fallen behind on graphics capabilities while at the same time wanting to use differing graphics as part of their price waterfall. Astute technical readers would also note that Intel’s challenge with graphics was rooted deeply in their failure to achieve critical mass for mobile and the resulting attempt to repurpose their failed mobile chipsets for low-end PCs. PC makers working to have PCs available at different price points loved the idea of hardline differentiation in Windows, though they did not like the idea of having to label PCs as basic, hence the XP PCs were labeled “capable.” Also worth noting was that few Windows XP PCs, especially laptops, were capable of the Home Premium experience due to the lack of graphics capabilities. When Vista released, new PCs would have stickers stating they ran Windows Vista or Windows Vista Basic, at least clarifying the single sticker that had been placed on eligible Windows XP computers.

    Eventually, the suit achieved class-action status, always a problem for a big company. The fact that much of the chaos ensued at the close of a hectic product cycle only contributed to this failure. My job was to support those on the team who had been part of the dialogue across PC makers, hardware makers, and the numerous marketing and sales teams internally.

    The class-action status was eventually reversed, and the suit(s) reached a mutually agreeable conclusion, as they say. Still, it was a great lesson in the need to repair both the relationships and the communication of product plans with the hardware partners, not to mention to be more careful about system requirements and how features are used across Windows editions.

    In addition to these external issues, the Vista team had gotten ahead of itself on code sharing and platform capabilities that spanned multiple groups at Microsoft.

    The first of these was one of the most loved modern Windows products built on top of Windows XP, Windows Media Center Edition (WMC, or sometimes MCE). In order to tap into the enthusiasm for the PC in the home and the convergence of television and PCs, long before smartphones, YouTube, Netflix, or even streaming, the Windows team created a separate product unit (rather than an integrated team) that would pioneer a new user interface, known as the 10-foot experience, and a new “shell” (always about a shell!) designed around using a PC with a remote control to show live television, home DVD discs, videos, and photos on a big screen, and also play music. This coincided with the rise in home theaters, large inexpensive disk drives capable of storing a substantial amount of video, camcorders and digital cameras, and home broadband internet connections. The product was released in 2002 and soon developed a relatively small but cultlike following. It even spawned its own online community called “Green Button,” named after the green button on the dedicated remote control that powered the shell’s 10-foot user interface.

    The product was initially sold only with select PCs because of the need for specific hardware capabilities. Later, with Windows Vista (and Windows 7), WMC was included in the premium editions. Usage, judging by both sales and anonymously collected telemetry, was low, and repeat usage came from a small fraction of even those customers. Nevertheless, there were vocal fans, and we had no plans to give up.

    WMC was hitting real challenges in the market, though, especially in the United States, where television was moving from analog CATV to digital, and with digital came required set-top boxes and a new and not quite available technology called CableCARD, required to decrypt the cable signal. Not only did this make things difficult for WMC, it made things difficult for anyone wanting to view cable TV, as if the encrypted analog channels were not difficult enough already. Everyone trying to use CableCARD had an activation story involving what was essentially a debug interface: typing in long hex strings and awaiting a “ping” back from the mysterious “head end.” The future for the innovative TV experience in WMC was looking bleak.

    Additionally, WMC was bumping up against the desires of the Xbox team to expand beyond gaming. The Xbox team had recently unveiled a partnership with the new Netflix streaming service to make it available on Xbox. Some of the key leaders on WMC had moved from Windows to the Xbox organization and began to ask about moving the WMC team over with them.

    At the time, I was up to my elbows in headcount and org charts and was more than happy to move teams out of Windows, especially if it was straightforward, meaning they could just pick up the work and there was no shipping code being left behind unstaffed. This quickly became my first debate ever at Microsoft over headcount and budgets, because the destination organization was under tight revenue and expense review.

    The WMC team was, surprisingly, hundreds of people, but it also had dependencies on numerous other teams across networking, graphics, and the core user experience. We could easily move the core WMC team, but getting a version of WMC integrated with the new engineering system and the to-be-developed delivery schedule (which we were planning) was a concern. Of course the team wanted to move to Xbox, but it had little interest in delivering WMC back to Windows, especially as the overall engineering process changed. They assumed we would simply move all the headcount and the team and then stand up a new WMC team in Windows. They had awful visions of being a component on the hook to meet an unappealing schedule. We could not just give up on WMC, even with such low usage, without some sort of plan.

    I learned that moving humans and associated budgets was fine. But CollJ had been working with finance on headcount and was told we had to also move revenue, something I had never heard of. I had no idea how that might even work. In Vista, Media Center was part of the Home Premium and Ultimate SKUs and no longer a separate product as it had been for Windows XP. How could one arrive at the revenue for a single feature that was part of a SKU of Vista? Perhaps back when WMC was a separate product this made sense, but at the time it seemed like an accounting farce. In fact, the judge in the Vista Capable lawsuit even removed class-action status because of an inability to determine which customers bought Windows Vista because of which premium features.

    Microsoft had been divided into seven externally reported business segments; each quarterly earnings filing with the SEC reported a segment as though it were an independent business. The result of this was more visibility for the financial community, which was great. Internally these segments did not line up with the emphasis on code-sharing, feature bundling, or shared sales efforts. For example, from a product development perspective there was a lot of code sharing across all products—this was a huge part of how Microsoft developed software. Costs for each segment could never accurately reflect the R&D costs. An obvious example was how much of Windows development could/would/should be counted in the Server and Tools segment, given that so much of the server product was developed in the Windows segment.

    My view was that there was a $0 revenue allocation for any specific feature of Windows—that was the definition of product-market fit and the reality that nobody bought a large software product for any single feature. This was always our logic in Office, even when we had different SKUs that clearly had whole other products to justify the upsell. Office Professional for years cost about $100 more than Office Standard simply for the addition of the Access database. We never kidded ourselves that Access was genuinely worth billions of dollars on its own.

    Over several weeks, however, we had to arrive at some arbitrary number that satisfied finance, accounting, tax, and regulatory people. To do this, we moved the hundreds of people working on WMC to Xbox along with a significant amount of Windows unit revenue per theoretical WMC customer. It wasn’t my math, but it added up to a big number moving to the Entertainment segment. In exchange, we hoped we would get back a WMC that had improved enough to satisfy an enthusiastic fanbase, knowing that the focus was shifting to Xbox.

    Given the fanbase externally, including several of the most influential Windows writers in the press, there was a good chance that such a seemingly arbitrary organization move would leak. The move would be viewed by many as abandoning WMC. I used an internal blog post to smooth things over, describing my own elaborate home audio/video system, which included Windows Vista Ultimate and pretty much every Microsoft product. In doing this, I chalked up a ton of points with tech enthusiast employees, showing that I had a knee-deep knowledge of our products—something most assumed an Office person wouldn’t have, perhaps like BillG not understanding why a C++ developer knew Excel when I first interviewed to work for him 15 years earlier. In practice, my post made it clear that keeping this technology working to watch TV casually at home was impossible. I hated my home setup. It was ridiculous.

    As part of the final Longhorn cleanup, I also needed to reconcile the strategic conflicts between the Windows platform and the Developer Tools platform, and as the new person I found myself in the middle. It was a battle that nobody wanted to enter for fear of the implications of making the wrong technology bet.

    The Developer division created the .NET platform starting in the early 2000s to build internet-scale web server applications delivered through the browser, primarily to compete with Java on the server. It was excellent and remains loved, differentiated, and embraced by the corporate world today. It crushed Java in the enterprise market. It was almost entirely responsible for the success Windows Server saw in the enterprise web and web application market.

    The .NET client (desktop programs one would use on a laptop) programming model was built “on top” of the Windows programming model, Win32, with little coordination or integration with the operating system or the .NET server platform. This created a level of architectural and functional complexity, along with application performance and reliability problems, resulting in a messy message to developers. Should developers build Win32 apps or should they build .NET apps? While this should not have been an either/or, it ended up as such because of the differing results. Developers wanted the easier-to-use tools and language of .NET, but they wanted the performance and integration with Windows that came from Win32/C++. This was a tastes great, less filling challenge for the Developer division and Windows. In today’s environment, there are elements of this on the Apple platforms when it comes to SwiftUI versus UIKit; searching for that debate will turn up countless blog posts on all sides.
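    To make the trade-off concrete for readers who never wrote against the raw API, here is a minimal sketch, my own illustration rather than anything from Microsoft or this history, of roughly what classic Win32 in C++ asked of a developer before a single line of application logic ran. The class name and strings are arbitrary placeholders.

    // Minimal classic Win32 program: register a window class, create a
    // window, and pump messages. All of this is boilerplate the developer
    // owns before any application logic runs.
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);
            TextOutW(hdc, 10, 10, L"Hello from Win32", 16);  // draw a string
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);  // end the message loop
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
    {
        WNDCLASSW wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInstance;
        wc.hCursor       = LoadCursor(nullptr, IDC_ARROW);
        wc.lpszClassName = L"IllustrativeWindow";  // arbitrary class name
        RegisterClassW(&wc);

        HWND hwnd = CreateWindowW(L"IllustrativeWindow", L"Win32 sketch",
                                  WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                  640, 480, nullptr, nullptr, hInstance, nullptr);
        ShowWindow(hwnd, nCmdShow);

        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        return 0;
    }

    The managed frameworks collapsed all of this into a few lines and a designer surface, which is exactly why developers wanted the .NET tools while still wanting the performance and direct operating system integration that came from Win32.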

    It was also a classic strategic mess created when a company puts forth a choice for customers without being prescriptive (or complete). Given a choice between A and B, many customers naturally craft a third choice combining the best attributes of each, as though the code were a buffet. Still other customers polarize: some claim the new way is vastly superior, while others claim the old way is the only true way and refuse to switch until the new way is a complete superset. Typical.

    Longhorn aimed to reinvent the Win32 APIs, but the six-year gap from Windows XP was filled by the .NET strategy described above, at least for Microsoft platform zealots. The rest of the much larger world was focused on HTML and JavaScript. At the same time, nearly all commercial desktop software remained Win32, but the number of new commercial desktop products coming to market was diminishingly small and shrinking. Win32 was already on life support.

    The three pillars of Longhorn, WinFS, Avalon, and Indigo, failed to make enough progress to be included in Vista (together, these three technologies were referred to as WinFX). With Vista shipping, each of these technologies found a new home or was shut down. I had to do this last bit of Vista cleanup, which lingered long after the product was out the door.

    WinFS receded into the world of databases where it came from. As discussed, it was decidedly the wrong approach and would not resurface in any way. Indigo was absorbed into the .NET framework where it mostly came from. Avalon, renamed Windows Presentation Foundation (WPF), remained in the Windows Client team, which meant I inherited it.

    WPF had an all-expansive vision that included a unique language (known as XAML) and a set of libraries for building graphical and data-rich applications. Taken together and to their logical end point, the team spoke of Avalon as a replacement for HTML in the browser and also for .NET on the client. This was the reason work had stopped on Internet Explorer, as the overall Longhorn vision was to bring this new level of richness to browsing. From the outside (where I was, in Office) it seemed outlandish, and many on the outside agreed, particularly those driving browser standards forward. Still, this opened a second front in the race to improve Win32, in addition to .NET. All along WPF would claim allegiance and synergy with .NET, but the connections were thin, especially as WPF split into a cross-platform, in-browser runtime and a framework largely overlapping with .NET. When I would reflect on WPF I would have flashbacks to my very first project, AFX, and how we had created a system that was perfectly consistent and well-architected yet unrelated to everything, in addition to being poorly executed.

    But what should developers have used—classic Win32, or the new frameworks of .NET, WPF, or something else I just learned about called Jolt? These were big decisions for third-party developers because there was an enormous learning curve. They had to choose one path and go with it because the technologies all overlapped. That was why I often called these frameworks “fat frameworks,” because they were big, slow, and took up all the conceptual room. Importantly, the common thread among the frameworks was their lack of coherence with Win32—they were built on top of Win32, duplicating operating system capabilities. Fat was almost always an insult in software.

    The approach taken meant that both .NET on the client and WPF were built on the shakiest of foundations, and what we were seeing with .NET on the client was as predicted based on all past experiences. I had a very long and ongoing email thread (which I later turned into an Office Hours blog post) with the wonderfully kind head of the Developer division, S. Somasegar (Somase, aka Soma), on this topic, where I pushed and pushed on the challenges of having fat frameworks emerge while we had a desire to keep Windows relevant. As I wrote the email, I was struck by how similar this experience was to the one I had twenty years earlier as we built, and then discarded, the fat framework called AFX. While history does not repeat itself, it does rhyme. Many would debate the details, but fundamentally the team took the opposite path from the one we took years earlier. I recognize, writing this today, that some are still in the polarized camp and remain committed to some of these technologies. One of the most successful tools Microsoft ever released was Visual Basic, which was an enormously productive combination of framework, programming language, and development tool. That was not what we had here.

    Nothing with developers and Microsoft was ever simple. Microsoft’s success with developers was often quite messy on the ground.

    Along with large product development teams, there was also a large evangelism team responsible for gaining support and traction for new technologies with developers. This team was one of the gems of Microsoft. It masterfully built communities around developer initiatives. It was largely responsible for taking the excellent work in .NET for Server and landing it with thousands of enterprise developers. In fact, part of the challenge was that the evangelism team had moved to the Developer division, where the org chart spoke more loudly than the overall company mission, and the priority and resourcing of evangelism tilted heavily toward .NET in all forms over the declining Win32 platform.

    As a counterexample, years earlier the evangelism team, seeing incoherence in product execution, provided significant impetus to push the Windows NT product to fully adopt the Windows API, paving the way for Microsoft’s Win32 success. I previously shared my early-days story as a member of the C++ team pointing out how Windows NT differed from classic Windows APIs in Chapter II, I Shipped, Therefore I am. Many of the lessons in how divergent the two Windows APIs were surfaced in the well-known Windows NT porting lab run by the evangelism team, where developers from around the world would camp out for a few weeks of technical support in moving applications to Windows NT or Windows 95, before both shipped.

    Perhaps an organization-driven result was a reasonable tradeoff, but it was never made explicit, as was often the case with cross-divisional goals. In many ways we accelerated the challenges through our own actions, but because that was never a stated goal we spent too much time in meetings dancing around the results we were seeing in the erosion of Win32.

    The evangelism team didn’t ever fail at their mission and reliably located or created champions for new technologies. It seemed there were always some outside Microsoft looking to get in early on the next Microsoft technologies. The evangelism team was expert at finding and empowering those people. They could summon the forces of the book, consulting, and training worlds to produce volumes of materials—whole books and training courses on XAML were available even before Vista was broadly deployed. Although WPF had not shipped with any product, it had a strong group of followers who trusted Microsoft and made a major commitment to use WPF for their readers, clients, or customers (as authors, consultants, or enterprise developers). Before Vista shipped, WPF appeared to have initial traction and was a first-class effort, along with .NET on the client. WPF had an internal champion as well. The Zune team used early portions of WPF for software that accompanied their ill-fated, but well-executed, iPod competitor.

    Things were less clear when it came to WPF and Vista. WPF code would ship with Vista, but late in the product cycle the shiproom command came down that no one should use WPF in building Vista because of the burden it would place on the memory footprint and performance of applications. This caused all sorts of problems when word spread to evangelists (and their champions). People were always looking for signals that Microsoft had changed its mind on some technology. Seeing the risk to WPF of not being performant, the Avalon team (the team was also called Avalon) set out to shrink WPF and XAML into a much smaller runtime—something akin to a more focused and scenario-specific product. This was a classic Windows “crisis” project, called Jolt, added to the product plans sometime in 2005 or early 2006 while the rest of Longhorn was just trying to finish with quality.

    Jolt was designed to package up as much of WPF as could fit in a single ActiveX control, first called WPF/E (for WPF Everywhere) and later, in its final form, Silverlight. This would make it super easy to download and use. Streaming videos and small graphical games to be used inside of a browser became the focus. Sound like Adobe Flash? Yes, Jolt was being pitched internally as Microsoft’s answer to Adobe Flash. To compete effectively with Flash, Jolt would also aim to be available across operating systems and browsers—something that made it even less appealing to a Windows strategy, and more difficult to execute. I was of the view that Adobe Flash was on an unsustainable path simply because it was both (!) a fat framework and an ActiveX control. By this time, ActiveX controls, which a few years earlier were Microsoft’s main browser extensibility strategy, had come to be viewed as entirely negative because they were not supported in other browsers and because they were used to deliver malware by tricking people into running them. The technical press and haters loved to refer to ActiveX as CaptiveX. As an aside, one of my last projects working on C++ was to act overly strategic and push us to adopt the predecessor to ActiveX, known as OLE Controls, and implement those in our C++ library, affording great, but useless, synergy with Visual Basic.

    For me, this counted as two huge strikes against Jolt. Imagine a strategic project, at this stage in the history of the company, that came about from a crisis moment of trying to find any code to ship while also using the one distribution method we had already condemned (for doing exactly the same thing previously). I did not understand where it was heading.

    Somehow, I was supposed to reconcile this collection of issues. When I met with the leaders of the team, they were exhausted though still proud of what they had accomplished. When I say exhausted, I mean physically drained. The struggle they had been through was apparent. Like many who had worked on the three pillars of Vista, the past few years had been enormously difficult. They wanted to salvage something of their hard work. I couldn’t blame them. At the same time, those weren’t the best reasons to ship code to hundreds of thousands of developers and millions of PCs without a long-term strategy for customers. My inclination was to gently shut down this work rather than support it forever knowing there was no roadmap that worked.

    The team, however, had done what Windows teams did often—evangelized their work and gained commitments to foreclose any attempt to shut down the effort. With the help of the evangelism team, they had two big customers lined up in addition to the third parties that the evangelism group had secured. In addition to Zune, the reboot of the Windows Phone (which would become Windows Phone 7) would have a portion of its developer strategy based on Jolt—not the phone itself, but it would use Jolt as a way to make it easy for developers to build apps for the phone operating system (prior to this time, apps for the phone were built against essentially the ancient Windows APIs that formed the original Windows CE operating system for phones). The Developer division wanted to bring WPF and Indigo into the .NET framework and create one all-encompassing mega-framework for developers, branded as the new version of .NET. The way the .NET framework generally addressed strategic issues was to release a new .NET that contained more stuff under the umbrella of a new version with many new features, even if those new features strategically overlapped with other portions of .NET.

    Given all this, the choice was easy for me. As they requested, the Phone team and the Developer division took over responsibility for the Jolt and WPF teams, respectively. It was a no-brainer. Eventually the code shipped with Windows 7 as part of a new .NET framework, which was planned anyway. Most everyone on the Windows team, particularly the performance team in COSD and the graphics team in WEX, was quite happy with all of this. The Windows team had always wanted to focus on Win32, even though there was little data to support such a strategy.

    While this decision clarified the organization and responsibility, it in no way slowed the ongoing demise of Windows client programming nor did it present a coherent developer strategy for Microsoft. The .NET strategy remained split across WPF and the old .NET client solutions, neither of which had gained or were gaining significant traction—even with so much visible support marshalled by the evangelism team. Win32 had slowed to a crawl, and we saw little by way of new development. It was discouraging.

    Again, many reading this today will say they were (or remain) actively developing on one or the other. My intent isn’t to denigrate any specific effort that might be exceedingly important to one developer or customer, but simply to describe what we saw happening in total across the ecosystem, as evidenced by the in-market telemetry. One of the most difficult challenges with a developer platform is that most developers make one bet on a technology and use it. They do not see histograms or pie charts of usage because they are 100% committed to the technology. They are also vocal, and with good reason: their livelihoods depend on the ongoing health of a technology.

    With everything to do with developers, APIs, and runtimes, and with the schism in place, the problem, or perhaps the solution, was that this was all “just code,” as we would say. What that means is twofold. First, there was always a way to demonstrate strategy or synergy. In a sense, this was what we’d disparagingly call stupid developer tricks. These were slides, demonstrations, or strategic assertions that showed technical relationships between independently developed, somewhat-overlapping, and often intentionally different technologies. The idea was to prove that the old and new, two competing new, or two only thematically connected technologies were indeed strategically related. My project to support OLE Controls in C++ was such a trick.

    Technically, these tricks were the ability to switch languages, use two strategies at the same time, or tap into a Win32 feature from within some portion of something called .NET. A classic example came up during the discussion about where Jolt should reside organizationally. It was pointed out that Jolt had no support for pen computing or, subsequently, touch (among other things) since there was none in .NET or WPF to begin with. These were both key to Windows 7 planning efforts. Very quickly the team was able to demonstrate that it was entirely possible to directly call Win32 APIs from within Jolt. This was rather tautological; it also undermined the cross-platform value proposition of Jolt and, importantly, lacked tools and infrastructure support within Jolt itself.
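    The actual demonstration would have been done from managed code via platform invoke declarations; the sketch below is only an illustration of the underlying pattern, in plain C++, of reaching past whatever framework you are in and binding directly to a Win32 export at runtime. The choice of RegisterTouchWindow is mine, as an example of a Windows 7-era capability the frameworks did not surface on their own, not a detail from the original demo.

    // Illustrative only: bind directly to a Win32 export at runtime, the
    // generic form of the "just call the OS" developer trick described above.
    #include <windows.h>

    typedef BOOL (WINAPI *RegisterTouchWindowFn)(HWND, ULONG);

    bool EnableTouchIfAvailable(HWND hwnd)
    {
        // Any process that owns an HWND already has user32.dll loaded.
        HMODULE user32 = GetModuleHandleW(L"user32.dll");
        if (!user32)
            return false;

        // Look up the export by name; on versions of Windows older than 7
        // it simply is not there, so the caller degrades gracefully.
        auto fn = reinterpret_cast<RegisterTouchWindowFn>(
            GetProcAddress(user32, "RegisterTouchWindow"));
        if (!fn)
            return false;

        return fn(hwnd, 0) != FALSE;  // 0 = default touch registration flags
    }

    The call works, which was the whole point of the trick, but nothing in the framework’s designers, tooling, or cross-platform story knows it exists, which was the whole problem.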

    Second, this was all “just code,” which meant at any time we could change something and maybe clean up edge cases or enable a new developer trick. Fundamentally, there was no escaping that Win32, .NET, WPF, and even Jolt were designed separately and for different uses with little true architectural synergy. Even to this day, people debate these as developers do—this is simply a Microsoft-only version of what goes on across the internet when people debate languages and runtimes. Enterprise customers expect more synergy, alignment, and execution from a single company. More importantly, developers making a bet on the Microsoft platform expect their choices to be respected, maintained, and enhanced over time. It is essentially impossible to do that when starting from a base as described, and as Microsoft amassed from 2000 to 2007.

    As simple as it was to execute moving these teams, in many ways it represented a failure on my part, or, more correctly, a capitulation. I often ask myself if it would have been better to wind down the efforts rather than allow the problem to move to another part of the company. I was anxious to focus and move on, but it is not clear that was the best strategy when given an opportunity to bring change and clarity when developers so clearly were asking for it, even if it meant short-term pain and difficulty. It would have been brutal.

    It grew increasingly clear that there were no developers (in any significant number, not literally zero) building new applications for Windows, not in .NET or WPF or Win32. The flagship applications for Windows, like Word, Excel, and others from Microsoft along with Adobe Photoshop or Autodesk AutoCAD, were all massive businesses, but legacy code. The thousands of IT applications connecting to corporate systems, written primarily in Visual Basic, continued in daily use but were being phased out in favor of browser-based solutions for easier deployment, management, portability, and maintenance. They were not using new Windows features, if any existed. The most active area for Windows development was gaming, and those APIs were yet another layer in Windows, called DirectX, which was part of WEX and probably the most robust and interesting part of Win32. Ironically, WPF was also an abstraction on top of those APIs. ISVs weren’t using anything new from Microsoft, so it wasn’t as though we had a winning strategy to pick from.

    Further evidence of the demise of Win32 arose as early as 1996 from Oracle’s Larry Ellison, who put forth the idea of a new type of browser-only computer, the Network Computer. (Sound like a Chromebook?) At the time, Marc Andreessen famously said that Windows would become, simply, a “poorly debugged set of device drivers,” meaning that the only thing people would care about would be running the browser. Years later, Andreessen would point out the original line was based on something Bob Metcalfe had said. Eight years later we had reached the point where the only new and interesting mainstream Windows programs were browsers; at this time only Firefox was iterating on Windows, and the only interesting thing about Firefox was what it did with HTML, not Windows. The device drivers still had problems in Vista. In fact, that was the root of the Vista Capable lawsuit!

    Moving WPF and Jolt to their respective teams, while an admission of defeat on my part, could best be characterized as a pocket veto. These were not the future of Windows development, but I wasn’t sure what would be. We in Windows were doubling down on the browser for sure, but not as leaders, rather as followers. We had our hands full trying to debug the device drivers.

    XAML development continues today, though in a much different form. While it does not make a showing in the widely respected Stack Overflow developer survey of over 80,000 developers in 181 countries, it maintains a spot in the Microsoft toolkit. XAML will come to play an important role in the development of the next Windows release as well.

    With the team on firm(er) ground and now moving forward, we finally started to feel as though we had gained some control. By September 2007 we were in beta test with the first service pack for Vista, which OEMs and end users anxiously awaited.

    The team was in full execution mode now and we had milestones to plow through. While I felt we were heading in the right direction and had cleared the decks of obvious roadblocks, there was a looming problem, again from Cupertino. What was once a side bet for Microsoft would prove to be the most transformative invention of the 21st century…from Apple.

    On to 092. Platform Disruption…While Building Windows 7 [Ch. XIII]



  • Advertising is much more difficult than just about everyone believes it to be. In fact, one of the most challenging tasks for any executive at any company is to step back and not get involved in advertising. It is so easy to have opinions on ads and really randomize the process. It is easy to see why. Most of us buy stuff and therefore consume advertising. So, it logically follows, we all have informed opinions, which is not really the case at all. Just as product people hate everyone having opinions on features, marketing people are loath to deal with a cacophony of anecdotes from those on the sidelines. Nothing would test this more for all of Microsoft than Apple’s latest campaign, which started in 2006. I’d already watched enough advertising people receive conflicting and irreconcilable feedback to know not to stick my nose in the process.

    Back to 089. Rebooting the PC Ecosystem


    “I’m a Mac.”

    “And, I’m a PC.”

    The “Get a Mac” commercials, starting in 2006, changed the competitive narrative overnight and were a painful gut punch welcoming me to Windows.

    They were edgy, brutal in execution, and they skewered Windows with facts. They were well done. (Though it always bothered me that PC Guy bore a vague resemblance to the late and much-loved Paul Allen, Microsoft’s cofounder long focused on science and philanthropy.) Claims that drove Windows fans crazy, like “no viruses” on the Mac, were not technically true but were true in practice because, mostly, why bother writing viruses for Macs with their 6 percent share? That’s what we told ourselves. In short, these commercials were devastating.

    They probably bumped right up against a line, but they effectively tapped into much of the angst and many of the realities of PC ownership and management. Our COO and others were quite frustrated by them and believed the commercials not only to be untrue, but perhaps in violation of FTC rules.

    They were great advertising. Great advertising is something Apple seemed to routinely accomplish, while Microsoft found it to be an elusive skill.

    For its first twenty years, Microsoft resisted broad advertising. The company routinely placed print ads in trade publications and enthusiast magazines, with an occasional national newspaper buy for launches. These ads were about software features, power, and capabilities. Rarely, if ever, did Microsoft appeal to emotions of buyers. When Microsoft appeared in the national press, it was Bill Gates as the successful technology “whiz kid” along with commentary on the growing influence and scale of the company.

    With that growing influence in the early 1990s and a business need to move beyond BillG, a huge decision was made to go big in advertising. Microsoft retained Wieden+Kennedy, the Portland-based advertising agency responsible for the “Just Do It” campaign from Nike, among many era-defining successes. After much consternation about spending heavily on television advertising, Microsoft launched the “Where do you want to go today?” campaign in November 1994.

    Almost immediately we learned the challenges of advertising. The subsidiaries were not enamored with the tagline. The head of Microsoft Brazil famously pushed back on the tagline, saying the translation amounted to saying, “do you want to go to the beach today?” because the answer to the question “Where do you want to go?” in Brazil was always “the beach.” The feedback poured in before we even started. It was as much about the execution as the newness of television advertising. Everyone had an opinion.

    I remember vividly the many pitches and discussions about the ads. I can see the result of those meetings today as I rewatch the flagship commercial. Microsoft kept pushing “more product” and “show features” and the team from W+K would push back, describing emotions and themes. The client always wins, and it was a valuable lesson for me. Another valuable lesson came from Mike Maples (MikeMap), who had seen it all at IBM. Just before the formal go-ahead he said something like, “just remember, once you start advertising spend you can never stop. . .with the amount of money we are proposing we could hire people in every computer store to sell Windows 95 with much more emotion and information. . .” These were such wise words, as was routine for Mike. He was right. You can never stop. TV advertising spend for a big company, once started, becomes part of the baseline P&L like a tax on earnings.

    The commercials were meant to show people around the world using PCs, but instead came across as almost cold, dark, and ominous, much as many were starting to perceive Microsoft. That was version 1.0. Over the next few years, the campaign would get updates with more colors, more whimsy, and often more screenshots and pixels.

    What followed was arguably the most successful campaign the company would ever execute, the Windows 95 launch. For the next decade, Microsoft would continue to spend heavily, hundreds of millions of dollars per year, though little of that would resonate. Coincident with the lukewarm reception to advertising were Microsoft’s challenges in branding, naming, and in general balancing speeds and feeds with an emotional appeal to consumers. Meanwhile, our enterprise muscle continued to grow as we became leaders in articulating strategy, architecture, and business value.

    In contrast, Apple had proven masterful at consumer advertising. From the original 1984 Super Bowl ad through the innovative “What’s on your PowerBook?” (1992) to “Think Different” (1997-2000), and many of the most talked-about advertisements of their day such as “C:\ONGRTLNS.W95” in 1995 and “Welcome, IBM. Seriously” in 1981, Apple had shown a unique ability to get the perfect message across. The only problem was that their advertising didn’t appear to work, at least as measured by sales and/or market share. The advertising world didn’t notice that small detail. We did.

    Starting in 2006 (Vista released in January 2007), Apple’s latest campaign, “Get a Mac,” created an instant emotional bond with everyone struggling with their Windows PC at home or work, while also playing on all the stereotypes that existed in the Windows v. Mac battle—the nerdy businessman PC slave versus the too-cool hipster Mac user.

    The campaign started just as I joined Windows. I began tracking the commercials in a spreadsheet, recording the content and believability of each while highlighting those I thought particularly painful in one dimension or another. (A Wikipedia article would emerge with a complete list, emphasizing the importance of the commercials.) I found myself making the case that the commercials reflected the state of Windows as experienced in the real world. It wasn’t really all that important whether the Mac was better, because what resonated was the fragility of the PC. There was a defensiveness across the company, a sense of disbelief that the “5% share Mac” could be making these claims. I managed to get into a bit of a row with the COO, who wanted to go to the FTC and complain that Apple was not telling the truth.

    Windows Vista dropped the ball. Apple was there to pick it up. Not only with TV commercials and ads, but with a resurging and innovative product line, one riding on the coattails of Wintel. The irony that the commercials held up even with the transition to Intel and a theoretically level playing field only emphasized that the issue was software first and foremost, not simply a sleek aluminum case.

    While the MacBook Air was a painful reminder of the consumer offerings of Windows PCs, the commercials were simply brutal when it came to Vista. More than 50 commercials ran from 2006 to 2009, starting with Apple’s transition to Intel and continuing right up to the eve of the Windows 7 launch, when a new commercial ran. Perhaps the legacy of the commercials was the idea that PCs have viruses and malware and Macs do not. No “talking points” about market share, or that malware targets the greatest number of potential victims, or simply that the claim was false would matter. There was no holding back: this was a brutal takedown, and it was effective. It was more effective in reputation bashing, however, than in shifting unit share.

    One of the most memorable for me was “Security,” which highlighted User Account Control (UAC), the Windows Vista feature designed to prevent viruses and malware from sneaking onto your PC. UAC had become a symbol of the annoyance of Vista—so much so that our sales leaders wanted us to issue a product change to remove it. There is some irony in that this very feature is not only implemented across Apple’s software line today but is far more granular and invasive. That should sink in. Competitively, we all seem to become what we first mock.

    Something SteveB always said when faced with sales blaming the product group and the product group blaming sales for something that wasn’t working was “we need to build what we can sell, and sell what we build.” Windows Vista was a time when we had the product and simply needed to sell what we had built, no matter what.

    The marketing team (an organization peer to product development at this time) was under a great deal of pressure to turn around the perceptions of Vista and do something about the Apple commercials. It was a tall order. The OEMs were in a panic. It would require a certain level of bravery to continue to promote Vista, perhaps not unlike shipping Vista in the first place. The fact that the world was in the midst of what would become known as the Global Financial Crisis with PC sales taking a dive did not help.

    Through a series of exercises to find a point of attack, the team arrived at the idea that maybe too many people were basing their views of Windows Vista on hearsay and not actual experience. We launched a new campaign, the “Mojave Experiment,” that took a cinéma vérité approach to showing usability studies and focus groups for a yet-to-be-released, experimental version of Windows called Mojave. After unsolicited expressions of how bad Vista was, the subjects were given a tour of Windows Mojave. The videos were not scripted, and the people were all real, as were the reactions. Throughout the case study, and the associated online campaign, the subjects loved what they were being shown. Then at the end of the videos, the subjects were told the surprise—this was Windows Vista. To those old enough to know, it had elements of the old “Folgers Coffee” or “I Can’t Believe It’s Not Butter” commercials, classic taste-test advertising.

    The industry wasn’t impressed. In fact, many took to blog posts to buzzsaw the structure of the tests and the way subjects were questioned and shown the product. One post in a Canadian publication ran with the headline “Microsoft thinks you’re stupid” in describing the campaign. This was right on the heels of the Office campaign where we called our customers dinosaurs. We had not yet figured out consumer advertising.

    We still had to sell the Vista we had built. We needed an approach that was at the very least credible and not embarrassing, and that hit on the points about the Apple Macintosh everyone knew to be true: Macs were expensive, and as a customer your choices were limited.

    Apple’s transition to Intel was fascinating and extraordinarily well executed. The new Macs were widely praised, featured state-of-the-art components and an Intel processor unique to Apple at launch, and were superior in design and construction to any Windows PC. The new premium-priced Intel Macs featured huge and solid trackpads, reliable standby and resume, and super-fast start-up, all things most every Windows PC struggled to get right. We consistently found ourselves debating the futility of the Apple strategy of offering expensive hardware. The OEMs weren’t the only ones who believed cheaper was better; that belief was also baked into how Microsoft viewed the PC market. Apple had no interest in a race to profitless price floors and low margins, happily ceding that part of the market to Windows while selling premium PCs at relatively premium prices. In fact, their answer to the continued lowering of PC prices was to release a pricey premium PC.

    The original MacBook Air retailed for what seemed like an astonishing $1,799. That was for the lowest specification, which included a 13” screen, a meager 2GB of main memory, an 80GB mechanical hard drive, a single USB port, and an obscure video output port, without a DVD drive or network port. For an additional $1,000, one could upgrade to a fancy new solid-state drive, which was still unheard of on mainstream Windows PCs.

    As it would turn out, the MacBook Air sat right in the middle of the PC market price range, and that's just how PC makers liked it: stuck between the volume PC and the premium PC, neither here nor there. An example of the most popular laptop configuration was the Dell Inspiron 1325, a widely praised "entry level" laptop with an array of features, specs, and prices. On paper, many PC publications asked why anyone would buy an overpriced Macintosh. The Dell 1325 ranged in price from $599 to about $999 depending on how it was configured. The configuration comparable to a MacBook Air was about $699 and still had 50% more memory and three times the disk space. As far as flexibility and ports, the 1325 featured not just a single USB port but two, a VGA video connector, audio jacks, FireWire (for an iPod!), an 8-in-1 media card reader, and even something called an ExpressCard slot for high-speed peripherals. Still, it was a beast: while the same width and length, it was twice as thick and clearly more dense, weighing almost 5 lbs in the base configuration compared to the 3 lb MacBook Air. As far as battery life, if you wanted to be comparable to the Air you added a protruding battery that added about a pound of weight and made it so the laptop wouldn't fit in a bag.

    Purists would compare the Air to the regular MacBook, as we did in our competitive efforts, but the excitement was around the Air. The regular 13” MacBook weighed about 4.5 lbs and cost $1,299, which made for a more favorable comparison. It was clear to me that the Air was the future consumer PC, as most PC users would benefit from lighter weight, fewer ports, and a simpler design. As much as I believed this, it would take years before the PC industry broadly recognized that "thin and light" did not have to be a premium product. The MacBook Air would soon drop to a $999 entry price, which is when it began to cause real trouble for Windows PCs.

    The higher-end MacBook Air competitor from the PC world was the premium M-series from Dell. (Incidentally, I'm using Dell as an example; HP and Lenovo were similar in most every respect.) The Dell XPS M1330, the forerunner to today's wonderful Dell XPS 13, was a sleeker 4 lbs and also featured a wedge shape. With the larger and heavier battery it got a good 5 hours of runtime. Both Dells featured plastic cases with a choice of colors. It too had models cheaper than the MacBook or MacBook Air but could be priced significantly higher by adding more memory, disk storage, better graphics, or a faster CPU.

    A key factor in the ability of the Mac to become mainstream, however, was the rise in the use of the web browser for most consumer scenarios. A well-known XKCD cartoon featured two stick figures, one claiming to be a PC and another claiming to be a Mac, with the joint text pointing out "and since you do everything through a browser now, we're pretty indistinguishable." Apple benefitted enormously from this shift, or disruption, especially as Microsoft continued to invest heavily in Office for the Mac. The decline in new and exciting Windows-based software described in the previous section proved enormously beneficial to Apple when it came to head-to-head evaluations of Mac versus Windows. Simply running Office and having a great browser, combined with the well-integrated Apple software for photos, music, videos, and mail, proved formidable, and somewhat enduring with the rise of non-PC devices.

    We were obsessed with the pricing differences. We often referred to these higher prices as an "Apple Tax" and even commissioned a third party to study the additional out-of-pocket expenses for a typical family when buying Macs versus Windows PCs. A whitepaper was distributed with detailed comparison specifications showing the better value PCs offered. In April 2008 we released a fake tax form itemizing (groan) the high cost of Apple hardware.

    From our perspective, or perhaps rationalization, this was all good. Consumers had choice, options, and flexibility. They could get the PC they needed for their work and pay appropriately, or not. This thesis was reinforced by the sales of both PCs and Macs, no matter what anyone was saying in blogs. The PC press loved this flexibility. Retailers and OEMs relied on the variety of choices to maximize margin. Retailers in particular struggled with Apple products because they lacked key ways to attach additional margin, such as upsells or service contracts, not to mention Apple's unwillingness to pay hefty slotting and co-advertising fees.

    Choosing a PC, while a complicated endeavor, was also the heart and soul of the PC ecosystem. Once Apple switched to Intel, there was a broad view that the primary difference between Mac and Windows boiled down to the lack of choice, the high prices, and the smaller compatible software and peripheral ecosystem that characterized the Macintosh.

    To make this point, Microsoft launched a new campaign, "Laptop Hunter," that ran in 2009. In these ads, typical people are approached outside big-box retailers while trying to decide what computer to buy. A PC or a Mac? In one ad, "Lauren" even confesses she is not cool enough for a Mac while noticing just how expensive they are (NB, Lauren is almost the perfect representation of a Mac owner.) She heads over to a showroom with a vast number of choices and whittles her way down to a sub-$1,000 PC with everything she needs. Another success.

    Not to belabor this emerging theme, but no one believed these ads either. Only this time, the critics and skeptics were livid: it appeared Lauren was an actress, and that called into question the whole campaign. In addition, Apple blogs went frame by frame through the ad to "prove" various aspects of the shoot were staged or simply not credible. The tech blogs pointed out the inconsistencies in, or staged aspects of, Lauren's requirements as being designed to carefully navigate the Apple product line.

    Laptop Hunter offers some insight into how addictive (or necessary) television advertising had become and the scale at which Microsoft was engaging. The television campaign debuted during the NCAA basketball tournament in the US, in prime time. It was supported by top-quality prime-time network shows (Grey's Anatomy, CSI, The Office, Lost, American Idol), late-night staple programming (Leno, Letterman, Kimmel, Conan, and Saturday Night Live), and major sports events and playoff series (NCAA Basketball, the NBA, MLB, and the NHL). Cable networks included Comedy Central, Discovery, MTV, VH1, History, and ESPN. The online campaign included home page execution on NYTimes.com, as well as "homepage take-overs" (the thing to do back then) on WSJ, Engadget, and CNN. We also supported this with an online media buy targeted at reaching people who were considering a non-Windows laptop (a Mac). We linked from those banner ads to a dedicated Microsoft web site designed to configure a new PC and direct buyers to online resellers, closing the loop to purchase. The level of spending and effort was as massive as the upside.

    I could defend the advertising, but at this point I am not sure it is worth the words. Besides, Apple responded to the ad with a brutal reiteration of viruses and crashes on PCs and the notion that lots of bad choices is really no choice at all. It is rare to see two large companies go head-to-head in advertising, and you can see how Microsoft took the high road relative to Apple, deliberately so. The ads worked for Apple, but almost imperceptibly so in the broader market. Apple gained about one point of market share, which represented over 35% growth year over year for each of the two years of the campaign—that is huge. The PC market continued to grow, though at just over 10%. Still, that was enough of a gain to offset the share gains from Apple, which were mostly limited to the US and Western Europe.

    As much as the blowback from this campaign hurt, we were at least hitting a nerve with Apple fans and getting closer to a message that resonated with the PC industry: compatibility, choice, and value.

    For our ad agency, the essence of the PC versus Mac debate boiled down not to specs and prices but to a difference in the perceived customers. The Mac customers (like the agency itself) seemed to be cut from one mold: young, hip, artistic. The PC customer was literally everyone else. It seemed weird to us, and to our advertising agency, that Windows computers were not given credit for the wide array of people and uses they supported, even if stereotypically. We were proud of all the ways PCs were used.

    To demonstrate this pride, Bill Veghte (BillV), the senior vice president of Windows marketing (also reporting to Kevin Johnson), led the creation of a new "I'm a PC" campaign that started in the fall of 2008 and ran through the launch of Windows 7. Rather than run from Apple's "I'm a Mac," we embraced it. The main spot featured fast cuts of people from all walks of life, including members of the Microsoft community as well as some pretty famous people, talking about their work, their creations, and what they do with PCs. The ads featured a Microsoft employee, Sean Siler, a networking specialist from Microsoft Federal, who looked, unsurprisingly, like the stereotypical PC user portrayed by Apple. These ads were us.

    The advertising world viewed success through the creative lens so dominated by Apple. The ads were well received, and for the first time we landed spots (costing hundreds of millions of dollars) that we could be proud of while also emphasizing our strengths.

    The memorable legacy of the campaign would be the brightly colored "I'm a PC" stickers that nearly everyone at the company dutifully attached to their laptops. Meeting rooms filled with open laptops of all brands, colors, and sizes, all displaying the sticker. We made sure all of our demo machines featured the stickers as well. At the summer 2009 global sales meeting, just before Windows 7 would launch, BillV led the sales force in a passionate rally around "I'm a PC" and the field loved it. He was in his element, and they were pumped. The Windows 7-focused versions of this campaign featured individuals talking about their work saying "I'm a PC and Windows 7 was my idea," building on the theme of how Windows 7 better addressed customer needs (more on that in the next chapter).

    By the summer of 2009, the back and forth with Apple seemed to have run its course as we approached the launch of Windows 7 (more on that in the next chapter). The New York Times ran a 3,000-word story at the front of the Sunday business section titled "Hey, PC, Who Taught You to Fight Back?" covering what was portrayed as an "ad war, one destined to go down in history with the cola wars of the 1980s and '90s and the Hertz-Avis feud of the 1960s." There was even a chart detailing the escalating advertising spend of the two companies. The story also noted that the ads caught the attention of Apple, which pulled its ads from the market only to return with new "Get a Mac" ads criticizing Microsoft's ads. In the world of advertising, that counts as a huge victory. On the store shelves, the campaign finally seemed to at least slow the share loss to Apple worldwide and definitely pushed it back in the US.

    Nothing hit home more than a photo a few years later of the White House Situation Room in May 2011 during the raid on Osama bin Laden. That photo was captioned by the internet to illustrate the point of just who is a Mac and who is a PC. The meme featured some barefooted hipsters in a coffee shop captioned "I'm a Mac" alongside the Situation Room's secured PCs captioned "I'm a PC," with the heading "Any Questions?" We loved it. That seriousness was what we were all about.

    Of course, the real battle with Apple was now about software. Windows 7 needed to execute. We needed to build out our services offerings for mail, calendar, storage, and more, areas where Apple was still flailing even more than we were.

    While I was entirely focused on Windows 7 and moving forward, the ghosts of Vista and Longhorn would appear. Promises had been made that ultimately could not be kept, and we had to work through those.

    On to 091. Cleaning Up Longhorn and Vista

    Postscript. The “Get a Mac” ad that hit me the hardest for non-product reasons was the “Yoga” spot, which was funny to me because when I moved to Windows in March 2006 I switched to practicing yoga after a decade of Pilates. In the spot, PC guy switches from yoga to Pilates.



  • The word ecosystem is often used when describing Windows and the universe of companies that come together to deliver Windows PCs and software. Providing a platform is a much trickier business than most might believe. Bringing together a large number of partners, along with their competitors, who might share one large goal but differ significantly in the tactics used to achieve it, is fraught with conflict. The Windows ecosystem had been dealt a series of painful blows over the years, resulting in a loss of trust and collective capability. Where partnerships were required, the ecosystem had become a collection of…adversaries…or direct competitors…or conflicting distribution channels.

    Back to 088. Planning the Most Important Windows Ever

    In the summer of 2007, six months after Windows Vista availability, as we rolled out the Windows 7 product vision to the entire team, the press coverage of Vista began to get brutal. I say the coverage, but this was really about the customer reaction to the product; the press simply reflected that.

    Then the OEMs, large public companies with shareholders and quarterly earnings, started to do something almost unimaginable. They began to speak out about the problems with Vista. In a widely covered interview in the Financial Times discussing quarterly results, Acer president Gianfranco Lanci lashed out at Vista, saying the "whole industry is disappointed with Windows Vista," stating that Vista had stability problems, and doubting that Microsoft would remedy the issues within the next six months. He went on to suggest that customers really wanted Windows XP, knowing that in just a few months XP would be discontinued. Much of what he said in public, the OEMs had been telling us in private.

    I had been in my role for more than a year and still did not have much to say about what was coming next. How could I? We just didn’t know and all my experience with customers told me that claiming I didn’t know would not be acceptable or even credible. Still, pressure was mounting to show progress.

    The Windows (or PC) ecosystem is made up of Intel, PC makers (the OEMs), and the creators of hardware components and peripherals, known as Independent Hardware Vendors (IHVs). OEMs accounted for the vast majority of Microsoft's extremely lucrative Windows business. IHVs provided the key ingredients the OEMs counted on for innovation. Intel, provider of CPUs and an increasing portion of the main componentry, was the most central player in the hardware ecosystem as half of the legendary Wintel partnership.

    Suffice it to say the ecosystem was in an unhappy and untrusting state after years of product delays along with a series of feature, product, and pricing miscues. The delays had been extremely painful to OEMs and Intel, who counted on a new release of Windows to show off new PCs and drive growth in PC sales. IHVs had critical work to do enabling new PCs. They had to build software drivers that enabled new hardware and were compatible with the new features of Windows (such as 64-bit Windows or new security features). They also faced the increasing complexity of maintaining drivers for old versions of Windows as the time between releases increased and customers demanded new hardware support on both old and new platforms. The rise of Linux was stretching the Windows ecosystem even thinner as demand for Linux support increased. Fractures were everywhere.

    Intel contributed an increasingly large portion of the PC, much as Microsoft continued to add features to Windows. With Centrino®, for example, Intel added WiFi support directly to the components it provided to PC makers, which greatly expanded and standardized the use of WiFi in laptops. Intel was in the process of broadening its support for graphics as well, continuing to improve the integrated graphics chips it provided to PC makers (a particularly sore spot with Vista). Intel was able to better control the pricing of its components, encourage specific PC designs, and shift PC makers to various CPU choices using a combination of pricing actions and co-marketing arrangements. Originally, these "Intel Inside" advertising efforts were a huge part of how PC makers selected and marketed different models and lines of computers, and they were enormously profitable for Intel.

    Microsoft's relationship with Intel had not been particularly happy, going back to the 1990s when Intel began to embrace Java and other cross-platform technologies. More recently, Intel viewed the rise of Linux as a large opportunity, while Microsoft perceived it as a competitive threat. What Microsoft used to perceive as a moat, device driver support for Windows, was rapidly fading due to active efforts by Intel to support Linux to the same degree. Desktop Linux seemed to be held back by just a few missing device drivers that Intel could choose to supply, or so we worried.

    There was also a complex relationship between enterprise customers and Microsoft when it came to Windows. From a product and feature perspective, Windows was making increasing bets on the kinds of features businesses cared about, such as endpoint control, reliability, and management. Microsoft did not always want to give away those features for free to retail customers, where they might not apply. The fact that these features existed caused OEMs to consider ways to add them to the base Windows they sold, feeling that the base Windows was underpowered. In other words, features Microsoft added to premium editions of Windows simply served as starting points for features OEMs might look to provide with their own software on top of the basic editions of Windows, increasing OEM margin.

    The response to the increasing gap between consumers and enterprise was another reason for OEMs to embrace, or at least appear to embrace, Linux. At the very least, OEMs began to ponder the idea of selling PCs with free Linux and letting customers pick and choose an operating system on their own, as a backdoor way to avoid paying for a Windows license on every single PC. Microsoft did not want OEMs selling PCs without Windows when Windows was destined to be on the PC anyway, as that would only encourage piracy. This "Linux threat" was not as empty as the state of the technology would have indicated, as OEMs were actively starting to offer Linux on the desktop, especially if they were already offering it on servers. Some markets, such as China, which already had enormously high piracy rates, pursued this path aggressively.

    From the least expensive PCs for the budget conscious to the fanciest PCs for gamers, the Windows product and business depended on these partners, and vice versa. Any business book would tell you just how big a deal the Windows ecosystem was, as would anyone who followed the antitrust actions against Microsoft.

    We needed to reboot the relationship with and across the ecosystem. Doing so would be an important part of the planning process and run in parallel with planning the Windows 7 product.

    The perspectives, insights, and data from hardware and OEM partners were only part of the input to the planning process. Another part was the usage data from tens of millions of Windows users: what features, peripherals, and third-party programs were used, how often, and by which types of customers. Vista had done excellent work to incorporate performance and reliability measures of the system, especially for glitches like crashes and hangs, but usage data was inconsistent across the system. As we would discover, this data was not readily available or reliably collected across the whole product. We knew we would need as much as we could get for Windows 7 and certainly down the road, so during the planning for Windows 7 we began in earnest to implement more telemetry across the product. This data would form a foundation for many discussions with OEMs.

    The feverish pace and tons of work over the six months that followed the re-org tackled an expansive set of potential areas and distilled them down to a product plan, the Windows 7 Vision. Those six months would bump up against not one but two OEM selling seasons during which the OEMs would not get the news about "fixing Vista" they were hoping for. The planning memo came out in December 2006, even before Windows Vista availability. That timing, however, was too late to impact the February selling season. The July 2007 Vision was too late to become requirements for PCs for back to school in August or September of that year. Even though PCs would not ship with Windows 7 for quite some time, every selling season we missed would make it frustratingly difficult for those PCs to be upgraded to Windows 7. This was a problem for every part of the ecosystem, including Microsoft.

    The relationships were difficult, but more importantly the quality of work across the ecosystem was in decline. Communication was adversarial at best.

    Mike Angiulo (MikeAng) joined from Office and began an incredible effort at rebuilding the relationships with OEMs. Mike was not only a stellar technologist but a strategic relationship manager, having grown up as a salesman within his future family's business. He was also a trained mechanical engineer who rebuilt and raced cars, a natural-born poker champion, and an instrument-rated pilot who built his own plane.

    He recruited Roanne Sones (RSones) to lead the rehabilitation of the relationship with OEM customers. She was a college hire from Waterloo’s prestigious systems engineering program who had joined Office five years earlier. Having worked on projects from creating the Office layperson’s specification, to synthesizing customer needs by segment, to analyzing usage data across the product, Roanne brought a breadth of tools and techniques from Office to the OEM problem space. Also joining the team was Bernardo Caldas (BCaldas) who would bring deep insights combining usage data, financial modeling, and business model thinking to the team.

    Roanne quickly learned that a big part of the challenge would be the varying planning horizons the OEMs required. “Required” because they told us that they needed “final Windows 7 plans” in short order if they were to make the deadlines for a selling season. Whether they needed the information or not, the reality was they were so used to not getting what they wanted they simply made any request urgent and expansive.

    Each OEM was slightly different in how it approached the relationship, but all shared two key attributes. First, they viewed Windows as an adversary. Second, they did not take anything we told them to be the ground truth. They were constantly working the Microsoft organization and their connections, which were deep, across the separately organized OEM sales team, support organizations, and more, to triangulate anything we said and arrive at their own version of truth or reality. They also had no problem escalating to the head of global sales or to SteveB directly; several of the OEM executives had known Steve for decades.

    After a decade of missed milestones, dropped features, antitrust concerns, and contentious business relationships, the connection between Microsoft and OEMs was painfully dysfunctional. Considering Windows accounted for so much of Microsoft's profit, this was a disaster of a situation. It had been going on so long that most inside Microsoft seemed to shrug it off as just a part of the business, or "OEMs have always been like this." It was no surprise to me that the relationships had devolved.

    We found little to love in most new PCs, while at the same time Microsoft was rooted in a long history of believing, as BillG would say (though this is often misinterpreted, I think), that "ten years out, in terms of actual hardware costs you can almost think of hardware as being free." He meant that, relative to advancing software capabilities, the hardware resources required would not be a key factor in innovation. I'm not sure OEMs heard it that way.

    What was really going on, other than Microsoft's lack of reliability as a partner, not to dismiss that perfectly legitimate fact? It goes back to the history of PC manufacturing and its evolution to the current situation. When Dell and Compaq first manufactured PCs, they were rooted in the engineering and design of PCs, and each built out manufacturing and distribution channels—PCs made in the US (Texas) and shipped around the world.

    As the industry matured, more and more manufacturing moved to plants in Taiwan and China. Traditionally, components such as hard drives, accessory boards, and motherboards were shipped to the United States from locations around Asia for final assembly, including the addition of the Intel CPU manufactured in the US. Eventually, as more and more components were made in Asia, it became increasingly efficient to aggregate them there for assembly into a complete PC, which could then be shipped around the world. This transition is one Tim Cook, now the CEO of Apple, famously took Apple through even though Steve Jobs resisted it.

    Over time, these assembly companies aimed to deliver even more value and a greater share of the PC experience, as they described it. They even began to create speculative PCs and sell them to the OEMs, such as Dell and HP, for incorporation into their product lines after a bit of customization, such as component choice and industrial design. The preference for laptops over desktops made it even more critical to engineer a complete package, which these new original design manufacturer (ODM) partners became experts at doing. As the name implied, an ODM would both design and manufacture PCs (and many other silicon-based electronics). The ODMs developed such complete operations that some even created consumer-facing brands to sell PCs as a first party, in search of margins and revenue.

    In a sense, the headquarters of an OEM was the business operation and the ODM the product and manufacturing arm, managed on metrics of cost, time to delivery, and quality. In this view, Windows itself became just another part of the supply chain, albeit the second most expensive part, usually far behind Intel. That's why PCs tended to converge on similar designs. Some argued that the ODM process drove much of that, with a small set of vendors looking to keep costs low and sourcing from a small set of suppliers to meet similar needs from US sales and marketing arms. Even though there were a dozen global PC makers, the increasing level of componentization and the ODM model caused a convergence, first with desktops, and now we were seeing the same happen with laptops.

    Effectively, ODMs were driving a level of commoditization. On the one hand, this was great for better device driver support and a more consistent Windows experience, as investments the ecosystem made were leveraged across independent PC makers. On the other hand, it led to margin compression for PC makers, which put even more pressure on ODMs and Microsoft. Further, given that all PC makers faced similar constraints, PCs tended to converge to an average product rather than an innovative one. There were a few outliers, such as Sony, who continued to drive design wins, but with ever-decreasing volume (Sony sold off its PC business in 2014).

    Roanne and Bernardo treated the ODMs with the same level of attention as the OEMs, something that had not been done before. The PC business was extremely healthy in terms of unit volumes, but the OEMs were all struggling to maintain profit margins and were looking to ODMs for leverage. There was a great deal of envy of Apple and its sleek new MacBook laptops made from machined aluminum, but, more importantly, the margins Apple earned from those PCs were impressive. The OEMs were pressuring the ODMs to deliver the same build quality with room for ample margins and a lower price than Apple, which was believed to be charging premium prices at much lower volume. In manufacturing, volume is everything, so it is reasonable to assume that much higher volume could make lower prices possible, but only if the volume was concentrated in a small number of models, which the OEMs were not committed to. The ODMs thought this was impossible and continued to push their ability to deliver higher-margin, higher-priced premium laptops.

    Early in my time at Windows, I went to Asia to visit with some of the ODMs to see firsthand their perspective on the PC and the relationships across the ecosystem. Visiting with the ODMs brought these tensions front and center and proved informative.

    The ODMs that served multiple OEMs were quite stringent about secrecy and leaks within their own companies. Each OEM had distinct and secured facilities, and the ODM management structure was such that information was not shared across facilities. Visiting a single ODM meant seeing buildings for any one of the major global manufacturers, all identical on the outside, but each entrance guarded as though the building was owned and operated by the specific OEM. Access was closely guarded, and employees of the ODM never crossed into different facilities. I recall once having an Apple building pointed out to me in a manner that reminded me of driving by a building on the Washington, DC Beltway that everyone knows is a spy building but remains quiet about. An ODM even acknowledging it served Apple would jeopardize its business, even though it was an open secret.

    The ODMs themselves were struggling with margins, even though they benefitted from a highly favorable labor cost structure in Asia. Several were responsible for manufacturing Apple devices, and while they would certainly never, ever divulge any details about that process, we knew that much of the math they would show us, bleak as it was, was informed by what they were doing for Apple and other OEMs.

    One of the larger ODMs that I also understood made Macintosh laptops—and was run by a founder and CEO who had grown the company since the 1980s—proved to be an adventure to visit.

    I arrived for my tour and put on a bunny suit and booties, deposited my electronics and possessions into a safe, and entered through a metal detector. There were cameras everywhere. There were acres of stations on the floor of a massive assembly line, each one responsible for a step in the manufacturing process. Pallets of hard drives, motherboards, screens, and cases arrived at one end and progressed through the enormous assembly line, stopping at each station. One station added the hard drive, while another attached the screen. Another verified that everything powered up. The final assembly step was where they attached the Genuine Windows hologram that proved the Windows software was not pirated and was purchased legally. The ODMs always made a point of showing me that step, knowing their role in anti-piracy was something we were always on the lookout for. Then the machines were powered up and burned in for a few hours to make sure everything worked.

    After seeing the line, we ventured to the top floor of the building to the CEO's private office, which took up the entire floor. On this floor was a beautiful private art collection of ancient Chinese calligraphy and watercolors. After a ritual of tea and a tour of the gallery, the business discussions began.

    Instead of hearing about all the requirements from Windows, the CEO raised many issues about our mutual customers. We discussed the constant squeeze for margin, the pressure to compete with Apple without paying Apple prices, and, what I found most interesting, the desire for flexibility, which precluded everything else. The OEMs, it turned out, were born out of an era during which desktop PCs were made from a number of key components, such as a hard drive, graphics card, memory, and other input/output cards. Each of these could be chosen and configured at the time of purchase. This allowed for two crucial elements of the business. First, the customer had a build-to-order mindset, which was appealing and a huge part of the success of Dell. Second, the OEM could bid commodity suppliers of these components against each other and routinely swap in cheaper parts while maintaining the price for customers. This just-in-time manufacturing and flexible supply chain were all the rage in business schools.

    The problem was that this method did not really work for laptops and especially did not work for competing with Apple. Apple designed every aspect of a laptop and chose all the components and points of consumer choice up front. This gave Apple laptops the huge advantage of being smaller and lighter while also not having to account for a variety of components placing different requirements on software, cooling, or battery life. This integrated engineering was almost the exact opposite of how the Windows PC makers were designing laptops.

    During one visit in 2008, shortly after the MacBook Air had been released, it was all the ODMs could talk about. The only PCs that came close were made in Japan for the Japanese market and achieved little critical mass in the US outside of tech elites, such as the Sony VAIO PCG-505 or the Fujitsu LifeBook that I used. The Air's relatively low(-ish) price point would eventually lead to an Intel marketing initiative called Ultrabook™, but, as future sections will describe, it would be years before the PC ecosystem could respond to the Air.

    From an ODM perspective, the requirements for the supply chain were driven by non-engineers or marketing from the OEMs back at HQ and seemed disconnected from the realities of manufacturing. The ODMs felt they had the capability to build much sleeker PCs than they were being asked to build. Always implied but never stated was the fact that some of them built devices for Apple.

    Meeting after meeting, I heard stories of ODMs who knew how to build leading, competitive laptops, but the US OEMs, even when shown production-ready prototypes, would not add them to their product lines. They wanted cheaper and more flexible designs. Or at least that was the frustration the ODMs expressed.

    From a software perspective, I began to understand why PC laptops were the way they were. For example, they were relatively larger than Apple laptops to enable component swapping, as though they were desktops. Nearly every review of a Windows laptop bemoaned the quality of the trackpads—both the hardware and the software—and here again the lack of end-to-end focus, including a lack of unified software support from Windows and a requirement for multi-vendor support, made delivering a great customer experience nearly impossible. The same was true for cellular modems, integrated cameras, Bluetooth, and more.

    The desire for lower costs precluded advanced engineering and innovation in cooling. This led most Windows laptops to have relatively generic fans and overly generous case dimensions to guarantee airflow and cooling for flexible componentry. The plastic cases filled with holes and grilles were just the downstream effect of these upstream choices. So were the fan noise and the hot wind blowing out the side of a Windows laptop.

    Review after review said Windows laptops didn't compare to the leading laptops from Apple, and yet I could see the potential to build competitive ones if only one of the OEMs would buy them. They simply didn't see the business case. Their view was that the PC was an extremely price-sensitive market, and their margins were razor thin. They were right. Still, this did not satisfactorily explain Apple, nor the inability to at least offer a well-made and competitive Windows PC, though simply offering one would prove futile because of the price and limited distribution such a low-volume PC would necessarily command. We were caught in a bad feedback loop.

    A larger initiative Roanne and team would take on was the infamous "crapware" issue, a phrase coined by Walt Mossberg years earlier. Tech enthusiasts and reviewers tend to view crapware through that namesake lens, as software that just makes the PC worse. OEMs had a decidedly different view, and, to be honest, one I worked hard to be more sympathetic to as I learned about it. To the OEMs, additional software was a means of differentiation and a way to make their own hardware shine. We tended to think of crapware as trial versions of random programs or antivirus software, but to the OEMs this was a carefully curated set of products they devoted enormous resources to offering. While some were trials and services designed to have revenue upside, others were developed in-house and were there because of unique hardware needs. The grandest example came with Lenovo ThinkPads: a suite called ThinkVantage Technologies, or TVT. Under this umbrella, Lenovo enabled full enterprise management and control of the PC hardware. While I might have an opinion on quality or utility, there was no doubt they were putting a good deal of effort into this work, and most of the features were not part of Windows.

    Previously I described the OEM relationship as adversarial. That might have been an understatement. The tension with OEMs was rooted in their desire to customize Windows so that it was unique for each OEM or each product line from an OEM. Microsoft's view was that these customizations usually involved "crapware" and that Windows needed to remain consistent for every user regardless of what kind of PC it ran on. It is easy to see that, framed this way, the problem is unsolvable. That was amazing considering how completely and utterly dependent Microsoft was on OEMs and OEMs were on Microsoft. This is a case where new people with a new set of eyes had a huge opportunity to reset the relationship.

    What I just described was a few months of my own journey to understanding why PCs were the way they were, and now I felt I understood. Because I was new, and we had a new team led by Mike, we were optimistic that we could improve the situation. There was no reason to doubt that we could. We believed we understood the issues, the levers, and the players. It seemed entirely doable. We just had to build trust.

    The primary interaction with OEMs was "asks" followed by "request denied," which then led to a series of escalations and some compromise that made neither side happy. This would be repeated dozens of times for each OEM, for many of the same issues and some specific to an OEM. As with any dynamic where requests are denied, the result was an ever-increasing number of asks, and within those an over-ask in the hopes of reaching some midpoint. If an OEM wanted to add four items to the Start Menu, then the ask might be to "customize every entry on the Start Menu," and a reply might come back as "none" or "no time to implement that," and then eventually some compromise. It was painful and unproductive.

    There are two schools of thought on these issues. One is that it is obvious that Windows is a product sold by Microsoft and the OEMs should just pass it along as Microsoft intended it to be—relegating the OEM to a wholesale distributor. In fact, the contract to buy Windows from Microsoft specifically stated that the product was sold a certain way and should remain that way. Given the investment OEMs made in building a PC, they did not see the situation that way at all.

    The other is that it is equally obvious that the OEMs are paying Microsoft a huge amount of money and own the customer relationship, including support, so they should be able to modify the product on behalf of their end-user customers. As it would turn out, the first view was on pretty firm ground, at least prior to the 1990s antitrust case. After settling that case, the conclusion was that the right to modify the product as an end user would pass through to OEMs, and Microsoft faced limits on what it could require. (If people reading this are thinking about how Android works with phone makers today, you have stumbled upon the exact same issue.)

    Perhaps the word adversary, as I used it previously, was too blunt. In fact, the relationships were often far more complex and nuanced, fraught with combinations of aligned and misaligned incentives. It was entirely the case that the OEMs were our customers, but that was confusing relative to end users, who were certain they were buying a Microsoft product as well as a PC maker's product, and often the Microsoft brand was paramount in the eyes of consumers. The OEMs were also Microsoft partners in product development—we spent enormous sums of money, time, and resources to co-develop technologies and the whole product—yet this partnership felt somewhat one-way to both sides. OEMs were often viewed as competitors as they ventured into Linux desktops and servers, while at the same time they viewed Microsoft as offering just one of several choices they had for operating systems. We certainly viewed OEMs as the source of our shortcomings relative to Apple hardware, yet the OEMs viewed us as not delivering the software to compete with Apple. Even as distribution partners, depending on who was asked, the OEMs were distributors of Windows or Windows was a distribution tool for the PC itself.

    These dysfunctions, or "natural tensions" as most schooled in how the technology stack evolved referred to them, had been there for years. There's little doubt that the antitrust settlement essentially formalized, or even froze, the relationship, making progress on any part a challenge. The looming threat of further scrutiny remained as long as the settlement-mandated oversight body did. I met regularly with the "Technical Committee" and heard immediately of anything that might be concerning. Every single issue was resolved, and frankly most trivially so, but the path to escalate was always there.

    At the conclusion of the antitrust case, the Final Judgment, or Consent Decree (CD), formalized a number of aspects of the relationship; some surprisingly made it more difficult to produce good computers, and others simply reduced the flexibility the collective ecosystem had to introduce more competitive or profitable products. While the CD was generally believed to focus on the distribution of web browsers and Java (or, in other markets, media players and messaging applications), it also dictated many of the terms of the business licensing relationship. Many of these related to the terms and conditions of how Windows could be configured by OEMs, essentially extending the OEM rights beyond simply installing competitive browsers to just about any software deemed "middleware," as Java was viewed.

    In terms of bringing Windows PCs to market, the CD had three main sections:

    * Non-retaliation. The first element was that Microsoft could not retaliate against OEMs for shipping software (or middleware) that competed with Microsoft, including shipping computers running competitive operating systems. This codified the right of OEMs to ship Linux and even to dual-boot with Windows, for example. My, how the world has changed; now that is a feature from Microsoft!

    * Uniform license terms. Basically, this term meant all OEMs were to be offered the same license and the same terms. Historically, as with high-volume customers in most any business, there were all sorts of discounts and marketing programs that encouraged certain behaviors or discouraged others. Now these offers had to be the same for all OEMs (for the top 10 there could be one set of terms and then another for everyone else). In many ways the largest OEMs got what they wanted, but not quite, since this put the best customer on the same playing field as the tenth best. This did not preclude the ongoing Windows Logo program described below.

    * OEM rights. The CD specifically permitted a series of actions the OEMs could take with regard to Windows, including "Installing, and displaying icons, shortcuts, or menu entries for, any Non-Microsoft Middleware or any product or service" and "Launching automatically, at the conclusion of the initial boot sequence or subsequent boot sequences, or upon connections to or disconnections from the Internet, any Non-Microsoft Middleware." These terms, and others, got to the heart of many of the tensions with OEMs over "crapware," the software that many consumers, especially techies, complained about. Now, however, it was a right the OEMs had, and Microsoft could do little to prevent them from exercising it.

    There were many other terms of course. Since we were “stuck” with each other, with Windows 7 we tried a nearly complete reset of the OEM relationship.

    The bulk of this work was captured in the Windows logo program: the instructions from Microsoft for how OEMs installed and configured Windows during manufacturing, described in the OEM Preinstallation Kit (OPK) authored by Microsoft and subject to ample compliance scrutiny. Given the above, one could imagine that most anything was permitted. However, Microsoft retained the right to implement a discount program for strictly adhering to a set of constraints to earn the "Designed for" logo. Sometimes these were discounts and other times marketing dollars for demand generation, though these are by and large equivalent and interchangeable pricing actions.

    We viewed these constraints as setting up a PC to be better for consumers. OEMs might agree, but they also saw them as hoops to jump through in order to get the discount, which their margins essentially required. These logo requirements ran the gamut from basics like signing device drivers and distributing available service packs and patches to how much of the Start Menu could be configured. None of them ever violated the CD, per the ongoing oversight. It was these aspects of Vista that were particularly adversarial.

    We revisited all the terms and conditions in the OEM license (and logo) and worked to make the entire approach more civil. We wanted OEMs to have more points of customization while also building the product to more robustly handle those customizations—we aimed to reduce the surface area where OEMs could fracture the Windows experience. We would also spend a lot of energy making the case for why leaving things unchanged would be a good idea, in favor of lower support costs and more satisfied customers. A key change we made from the previous structure was to have the same team responsible for taking feedback and producing the logo requirements. This eliminated an organizational seam as well as a place where OEMs could spot potential conflicts across Microsoft and exploit them.

    As an example, early in the process of building Windows 7, Roanne and team presented to the OEMs examples of where they were given rights (and technical capabilities) to customize the out-of-box experience (OOBE), when customers first experienced a new PC. In one iteration we showed color-coded screens where the OEM-customizable region was clearly marked, indicating just how much of Windows was open to OEM customization. This type of effort was a huge hit.

    The process of collaborating with the OEMs was modeled after our very early Office Advisory Council (OAC), which had been run out of the same product planning group where MikeAng and Roanne previously worked. Instead of just endless slide decks that spoke at the OEMs, we engaged in a participatory design process with the OEMs, a planning process. We listened instead of talked. We gathered feedback. Then we answered their specific questions.

    We began the process of working with the OEMs just as we completed the Vision for Windows 7. We did that because by then we had a real grasp of what we would deliver and when, and what we said was rooted in a full execution plan. This was decidedly different from past engagements, where the meetings with OEMs reflected the historic Windows development process, which meant that much of what was said at any given time could change in both features and timing. For OEMs with very tight manufacturing schedules and thin margins, information that unreliable was extremely costly. At every interaction they had to take the information and decide whether to act on it, allocate resources, prioritize a project, and more, taking on the risk that the effort would be wasted. Or they could choose to delay acting on some information, only to find out later that starting the work had been critical to providing feedback that mattered. It was a mess.

    What Microsoft had not really internalized was just how much churn, and how many real dollars, this traditional interaction cost the OEMs. In fact, most involved on the Microsoft side felt, as I would learn, quite good about the openness and the information being shared. I learned otherwise both from the OEM sales team and directly from the CEOs of OEMs, who were starting to get rather uncomfortable with the lack of information. They began to get nervous over my lack of communication (literally me and not the OEM team), as I had not provided details of the next release even after being on the job for months. I just didn't know the answers yet. Our desire to be calm and rational created a gap in communication that worried people. It was not normal. We had not set expectations, and even if we had, it wasn't clear they would have believed us.

    MikeAng characterized the old interaction as "telling the OEMs what we were thinking" when we needed to be "telling OEMs what we were doing." By characterizing the improvement this way, we were able to avoid massive amounts of wasted effort and negative feelings. We also believed we would reduce the number of budget-like games when it came to requests. No matter how much Microsoft might caveat a presentation as "early thoughts" or "brainstorming," customers looking to make their own plans under a deadline will hear thoughts as varying degrees of plans. Only later, when those plans fail to materialize, does the gap between expectations and reality widen—the true origin of dissatisfaction. At that point the ability to remind a customer of whatever disclaimer was used is of little value. The customer was disappointed. This cycle repeated multiple times over the 5-year Vista product cycle, and many times before that over the years.

    With the Windows 7 vision in place, we began a series of OEM forums and started meeting at least every month with each OEM. Each forum (an in-person, workshop-style meeting) would focus on different parts of the product vision or details about bringing Windows 7 to market. Roanne and team dutifully documented the feedback and interactions. We actively solicited feedback on priorities and reactions to the product overall. Then we summarized that work and sent out summary-of-engagement memos to the OEMs. In effect, the relationship was far more systematic, and the information provided was far more actionable. From the very start we communicated a ship date for the product and milestones, and a great deal of information about the development process. We reinforced our attention to the schedule with updates and information along the journey.

    The entire process was run in parallel for the key hardware makers: storage, display, networking, and so on. We also repeated it for the ODMs. The team did an amazing job in a very short time, and all of it for the first time.

    Over the coming months, and really years, the reestablishment of trust and the effort to become a much more reliable, predictable, and trustworthy partner would in many ways not only change this dynamic and experience for OEM customers, but significantly improve the outcome for PC buyers. At the same time, OEMs were able to see improvements in satisfaction and potential avenues to improve the business. The PC would see many major architectural changes over the course of building Windows 7—the introduction of ink and touch panels, broad use of security chips and fingerprint login, the addition of sensors, the expansion of Bluetooth, the transition to HDMI and multiple monitors, high-resolution panel displays, ever-increasing use of solid-state disks, and the transition to 64 bits. The Ecosystem team would be the conduit and moderating force between the OEMs, their engineering teams, the engineering teams on Windows, and even Microsoft's own OEM sales and support team.

    The Windows business faced a lack of trust from every business partner. The Windows team needed to move away from a chaotic process that promised too much and delivered inconsistently and late. Instead, we aspired to make bold but reliable promises—my guiding principle, the mantra of promise and deliver. While we talked big, the ball was in our court. We put in place a smoother and more productive engagement with OEMs. Throughout the course of developing Windows 7 and beyond, we consistently measured the success of the engagement with qualitative and quantitative surveys. Through those we could track the ongoing improvement in the relationships.

    Promise and deliver.

    Just as we worked to gain renewed focus with the OEMs, Apple chose to declare war on Microsoft, and the PC, with clever and painfully true prime-time television commercials. While the commercials began before Vista, the release and subsequent reception of Vista were exactly the material the writers needed for the campaign to take off.

    On to 090. I’m A Mac


