Sunday, January 27, 2013

Apple Falling Apart Without Jobs

I wrote this a while back on Steve Jobs, and now we see how Apple has lost its direction in the same way it did during the Pepsi Cola years. In my own entrepreneurship I long to find a Steve Jobs to balance my inner Wozniak. The business and sales side of the game has never been my strength. I respect those who do it well. Jobs may not have been a technical visionary, but he was very detail and design oriented. The NeXT computer just looked hot. It was the first PC that did. Well, it cost an arm and a leg so it wasn't really approachable, but neither is a Ferrari.

Why did Google enlist Sebastian Thrun? Ray Kurzweil? They are trying to buy an ounce of Steve Jobs magic. Vision is valuable. Google got stuck with a car that drives itself, but it is well stocked in vision. And that's priceless. And with Sergey (we were classmates in the Stanford CS PhD program; at the same time he got Google, I got worked to death and made a lot of promises. The joys of being a woman) at the helm, things are starting to make more sense rather than "gee, we're playin' with billions 'cause we can."

Dokko.co just patented three things related to Google Glass-style internet-enhanced reality. It's the new big thing. It's coming. Sooner than you think. But my feeling is... if you can't think up a .com name for your company, you suck.

Hey Google, I read Ray Kurzweil's book on neural networks and genetic programming - all 20-year-old stuff (great synthesizer designs notwithstanding; I own four of his keyboards and a maxed-out K2000RS). Feel free to hire me to lead your augmented reality neural-cybernetics division. Trust me, it will be way cool.

------

In the year of Steve’s passing, I want to talk about the conversion of the nerd. I was there. I programmed both the Apple II and the CompuColor. I set them side by side in their battle. I know what happened.
Steve Jobs was not an intellectual or a geek or a nerd. Steve was a hippie turned messiah. He was technical and knew about circuits and programming, but he was never a master of them. He loved circuits and the power they provided and the digital revolution. Jobs and Woz met in high school, and later, when Steve Wozniak would visit home from college, Jobs was always there to learn. “We both loved electronics and the way we used to hook up digital chips,” Wozniak said. “Very few people, especially back then, had any idea what chips were, how they worked and what they could do. I had designed many computers so I was way ahead of him in electronics and computer design, but we still had common interests. We both had pretty much sort of an independent attitude about things in the world. …”
After high school, Jobs enrolled at Reed College in Portland, Oregon. Lacking direction, he dropped out of college after six months. In 1974, Jobs took a position as a video game designer with Atari. Several months later he left Atari to find spiritual enlightenment in India, traveling the subcontinent and experimenting with psychedelic drugs. In 1976, when Jobs was just 21, he and Wozniak started Apple Computer. The duo started in the Jobs family garage, and funded their entrepreneurial venture after Jobs sold his Volkswagen bus and Wozniak sold his beloved scientific calculator. They had $1,350, enough for parts and ramen noodles and not much else (were there ramen noodles back then?).
Their first computer – the Apple I – was a colossal failure. It barely sold. They had poured their lifeblood, their sweat, and all their time and finances into it. And it failed. It was ignored. It was nothing. Compare this to Larry Ellison, who took IBM's SQL language, built a database around it, and it sold and sold and SOLD. Endless steady growth. But for Jobs it was different, because it started with failure.
The single most important question in the history of Silicon Valley is what happened between the Apple I and the Apple II. And more importantly, why did they keep going? Well, when you work in a garage, your food is ramen noodles, and you pay no salaries, the few sales they did manage were able to fund continued development. And with it they added the one key ingredient that was missing – color. The Apple II was the world's second color computer for hobbyists. But at the same time, there was the CompuColor. And the CompuColor came first. And it was better. In fact, if you messed with it with bad code it could literally fry the circuits of the custom Intel monitor that struggled to keep up with its high-resolution graphics. The square blocks of the Apple II were a dinosaur by comparison. So how did the lesser machine win the fight?
I literally sat in a room with an Apple II and a CompuColor II duking it out for nerd mind share in the local high school. The CompuColor was the better machine. It was the one nerds fought over time on while the Apple II sat unused. It had the coolest keyboard this side of Mythros and probably EVER in computer legend. It had better games and much better graphics. It was more interesting to program. The Apple II had more games, most of which were programmed in BASIC, and the code was hideous: direct PEEK and POKE statements into video RAM, one after another by the hundreds. If you wanted the bouncing ball to change from green to flashing purple, you updated the hex code. The older students hated the dweebier and dumber Apple II users, so we would always hack the games and switch out sections of PEEK and POKE code. Suddenly Mario had only one leg to hop on, and his dinosaur now looked like a frisbee. The non-programmers would come in to play a game and go “What the HELL!” We would feign oblivion and hack away over and over on the real computer – the CompuColor – the joy of 11th grade hackerdom. A real machine for real engineers. After all this, one might ask not why Jobs succeeded, but why the CompuColor, with all that going for it, lost. As I’m about to show you, an important first lesson of success is that to win, you simply have to show up and be in the game. The second lesson is to do things right.
The King of Color – the CompuColor II

The young upstart, the Apple II


“Apparently, you could not format the 5.25″ disks yourself, surely because Intecolor wanted to make money by selling these preformatted disks… But many users ended up writing their own formatting programs.
The system was very vulnerable to certain hardware tinkering. Tampering with the addresses that accessed the hardware registers could wipe out all the RAM (it did something fatal to the refresh logic). It used an Intel CRT controller for screen processing. Altering the number of scanlines to too high a value could kill the CRT.
The ROM contained a ripped-off version of Microsoft BASIC and a simplistic file system. Microsoft found out about them, and forced ISC to become a Microsoft distributor. They also collected royalties on all machines sold up to that time.
The disk drive was originally designed to use an 8-track tape cartridge for storage (yes, you read that right!). When that proved too unreliable, they switched to a 5.25 inch disk drive. They didn’t change the file system, which still thought it was a tape drive. When you deleted a file, it re-packed all remaining files back to the front of the disk. It used the 8K of screen RAM as a buffer to do it, which led to some psychedelic I/O.” – OldComputers.com
In the end it was the government cracking down on them over FCC radio-interference rules, and Microsoft cracking down on them for royalties, that slew the CompuColor. They had gotten better faster by taking shortcuts. But they were screwing the developers by charging them for disks. Without disks in hand, developers didn’t feel like writing software for it. Once a monitor fried out, the high schools had no funds to replace them, and the CompuColor went to the trash, replaced by the more reliable Apple IIe.

Jobs and Wozniak had done it all themselves out of love. And that drive and care about the design made sure it was done right, without shortcuts, even if they got less far. Were there problems? Of course. But five years later the CompuColor would be gone, and the Macintosh would be released, catapulting Apple into the Fortune 500. And the greedy MBAs would think they no longer needed Jobs and cast him out. While the fuel of his passion had sustained Apple, within five years it would falter. Jobs would go on to drive much of the core of object-oriented programming and operating systems while at NeXT, when dopey computer science professors were stuck in their academic world of Pascal. Alongside NeXT he built up Pixar. But the real birth was the widespread use of Objective-C, object-oriented programming, C++, and finally Java and C#.
Objective-C was created by Brad Cox and Tom Love in the early 1980s. Cox was intrigued by problems of true reusability in software design and programming. One of the first object-oriented languages was Smalltalk, but it wasn’t backwards compatible with all the C code out there; it was really its own world. Cox wrote a pre-processor for C to add some of the capabilities of Smalltalk, which he called “OOPC,” for Object-Oriented Pre-Compiler. Cox showed that making interchangeable software components really needed only a few practical changes to existing tools – they needed to support objects in a flexible manner and allow for the code to be bundled into a single cross-platform format. Jobs picked up on this and selected Objective-C from StepStone, but then he did it one better: he extended the GCC compiler to support Objective-C and developed the AppKit and Foundation Kit libraries to make development faster and more powerful.
The NeXT Computer – What Steve Jobs Thought Computers Should Look Like

The Sun SPARCstation 1 (4/60), affectionately referred to by programmers as “the pizza box”

In an odd way, the real revolution that Jobs inspired would be unknown to the Apple people, because it wasn’t the hardware but the programming language of the computer where Jobs had his biggest impact. While Dr. Stroustrup would have given us C++ anyway, the NeXT Computer and its brilliant hardware – that optical disk that could fit all of Shakespeare! – had enough geek gadget gleam to get all the nerds excited about moving into object-oriented programming. Twice he got the nerds to shift their seats and turn to the machine he designed. The first time it was by surviving long enough to watch the top dog fail, but the second time he did it by offering us sexy. It cost $6,500 when released in 1990, about $20,000 in today’s money! Far beyond what nerds could afford.
Apple had released the LISA computer in 1983 (Steve Jobs was kicked off the project in 1982, so he is only partially to blame) at a staggering $10,000 entry price. It didn’t sell. I recall walking into a computer store (they were everywhere then) and drooling over the massive 2 megabytes of RAM. How could anyone ever write a program that could use so much memory? The LISA was a dream computer out of the reach of everyone. Yet just a few years later, here was Jobs again releasing a computer for nearly the same price. What could he possibly have been thinking? It was a price which meant only universities could afford them. What ended up happening is that every computer science department in the country wanted one, and all the students walked by and drooled. What on earth was this thing? At a time when Sun was dishing out the pizza box – stale crust to be sure – and DEC had us stringing input parameters, Jobs had us in revolution. Eventually the NeXT physical platform would fade, and the software would continue on.
The LISA, a failed first go…

Today all core Apple software and the programming for the iPhone is in Objective-C, which goes back to NeXT. What then, I ask, was Jobs' real legacy? It was never about the hardware.
Finally, a young man at CERN named Tim Berners-Lee sat down at this computer in 1991 to create the first web browser and web server, and with it… the web you are reading this article on today. Do you recognize it?

The Jobs story is not one of being the smartest or making the best product. It was all about perseverance and love of what they were doing. And that translated into doing things right. And caring about the product. By the time the Apple IIe was out, it was a solid, stable platform, and the developers jumped on. Once the developers were out there, he switched roles. He no longer soldered and built the machines. Instead he inspired the next generation to see them with the same awe and glory that he did. He was the messiah of the digital age.


Dennis Ritchie

I wrote my first C program in 1986. In 1988 I was an instructor in C. I taught advanced computer graphics and helped the artificial intelligence lab (the Winnifred Asprey Lab) AI students get their hands around LISP. But mostly I helped people with C. We had been BLUDGEONED by arrogant professors that PASCAL was the way. I hated PASCAL. I hated PASCAL so much I pretended I liked LISP. Then I learned PROLOG and was in desperate need of salvation when I learned C, and suddenly, for the first time, computers worked. I had to write some C++ code at my last job back in 2011, so that's 25 years of C programming I’ve done in one way or another. A quarter century of work that all began with a man largely forgotten and ignored by the industry because he passed at the same time that Jobs did. But he did so much more. Our lives are a million times more changed by Dennis Ritchie than by Jobs; you just don’t know it. His work isn’t sexy, you can’t put it in your pocket, and you can’t explain it to people who watch the Kardashians, but his work changed our world.

It’s really hard to explain the world of the old days of computing to those who weren’t there to see it; today it’s as if we all had computers growing out of our butts. We sat at machines where, when you typed, the computer typed onto paper. When you finished, you had that long roll of paper that showed everything you did. Why did we type on paper? Well, because that’s what typewriters did. Later they had TV-screen things that had no graphics at all, just green on black. There was this heavy thing called a Tektronix display that cost like twenty grand, and it did graphics in bright green. Few of us knew how to use it.

In the end, all those cell phones: all their communication happens in C, and it happens on operating systems written in C, and they talk to your printer through a driver written in C, and that is powered by a power station that uses control software written in C. And when we compile Java we get bytecode, which is then interpreted by a program… written in C.

Thank you, Dennis. You will be missed.

Being a technology dork

I was a real dork telling a CTO that I had been teaching C before they were born. I didn't really mean it that way; I was just reaching for the idea that, yeah, I've done this for a while. But one advantage of being an older engineer is that you've seen such an evolution. The advantage of being younger is that things are a lot more stable than they used to be. I don't think people who are new to coding really get that, no, we aren't so excited to learn the 400th new way to write UIs or the all-new language that's coming our way. But we'll grumble and learn and keep learning. Separate the wheat from the chaff.

I drew this little image map of the languages I've worked with.... it shows how things spread. It's been a long journey.


Sunday, January 13, 2013

Six Sigma and 24x7 Reliability in Large Scale Web Design


 
   by Gianna Giavelli

Many times, practitioners of Six Sigma or Gemba Kaizen will push on process and review as the way to get to near-perfect system uptime. But defect prevention is a complicated thing, and when it involves systems engineering, the process of hitting a defect and then performing a Six Sigma resolution process may be too slow. By too slow I mean it might iterate, with slightly different errors continuing to occur, resulting in terrible uptime scores before all the issues are ever figured out, if ever.

So what to do? What is an appropriate technology strategy when the boss says we have 99.999% uptime contracts? The answer isn't going to please people. The real solution is a Six Sigma defect policy, but only alongside a technology review policy.

Forming a technology policy:
One of the secrets to uptime is having clear and optimized code with appropriately selected technologies, and knowing those technologies' limits. After I led the re-architecting at Blueturn, the analytics platform went from complex, slow, and crashing (rarely) to stable and performant. You need someone who can really think through edge and black-swan conditions, anticipate as much as possible, and defend against it.

Some Examples:
   Perform a database audit to see how it will survive a hypergrowth situation. Solutions would be to consider a distributed, multi-node technology like Cassandra, or disk-level solutions like striping plus separation of index and data, and confirming drive space. I once got hit when the number of users went from 400 to over a million very quickly. And if it isn't the DB layer itself, it might be the caching technology or app server that can't keep up. Test this case specifically! A rough sketch of this kind of capacity projection follows.
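As a back-of-the-envelope illustration of that audit, here is a minimal sketch in Java. Every figure in it (rows per user, row sizes, disk sizes) is an invented assumption for the example, not a measurement from any real system; the shape of the check is what matters: project the data and index volumes after hypergrowth and compare them against the drives you actually have.

// CapacityAudit.java - hypothetical back-of-the-envelope storage projection.
// All figures below are illustrative assumptions, not measurements.
public class CapacityAudit {

    public static void main(String[] args) {
        long currentUsers = 400;            // where we are today
        long projectedUsers = 1_000_000;    // the hypergrowth case to survive

        long rowsPerUser = 5_000;           // assumed rows generated per user
        long bytesPerDataRow = 512;         // assumed average data row size
        long bytesPerIndexRow = 64;         // assumed average index entry size

        long dataDiskBytes = 2L * 1024 * 1024 * 1024 * 1024;   // assumed 2 TB data volume
        long indexDiskBytes = 512L * 1024 * 1024 * 1024;       // assumed 512 GB index volume

        long projectedRows = projectedUsers * rowsPerUser;
        long projectedDataBytes = projectedRows * bytesPerDataRow;
        long projectedIndexBytes = projectedRows * bytesPerIndexRow;

        System.out.printf("Users: %,d -> %,d, projected rows: %,d%n",
                currentUsers, projectedUsers, projectedRows);
        System.out.printf("Data:  %,d GB needed of %,d GB available%n",
                toGb(projectedDataBytes), toGb(dataDiskBytes));
        System.out.printf("Index: %,d GB needed of %,d GB available%n",
                toGb(projectedIndexBytes), toGb(indexDiskBytes));

        if (projectedDataBytes > dataDiskBytes || projectedIndexBytes > indexDiskBytes) {
            System.out.println("FAIL: hypergrowth would exhaust disk; consider striping, "
                    + "bigger volumes, or a distributed store like Cassandra.");
        } else {
            System.out.println("OK: projected growth fits the current volumes.");
        }
    }

    private static long toGb(long bytes) {
        return bytes / (1024L * 1024 * 1024);
    }
}

With these made-up numbers the data volume fails while the index volume still fits, which is exactly the kind of asymmetry that makes separating index and data onto their own disks worth auditing.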

  Front web services with message queues. Web services are thread-bound for performance; message queues are CPU-bound but with greater recoverability. Even for a synchronous request, routing the work through a message queue ensures that the work item is not lost and will eventually be processed. A toy sketch of the pattern follows.
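To make that concrete, here is a toy, in-process sketch of the pattern using java.util.concurrent. A real deployment would front the service with a durable broker (JMS, RabbitMQ, or similar) so items survive a crash; this in-memory queue only shows the shape: the endpoint enqueues and acknowledges immediately, and a background worker drains the queue and processes items eventually.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy illustration of fronting a web service call with a queue.
// In production the queue would be a durable broker; this in-memory
// stand-in only demonstrates the decoupling.
public class QueuedFrontEnd {

    private final BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();

    // What the web service endpoint calls: enqueue and acknowledge immediately,
    // so the work item is recorded even if downstream processing is slow or down.
    public void submit(String workItem) throws InterruptedException {
        workQueue.put(workItem);
        System.out.println("accepted: " + workItem);
    }

    // Background worker draining the queue; items are processed eventually.
    public void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String item = workQueue.take();
                    System.out.println("processed: " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public static void main(String[] args) throws Exception {
        QueuedFrontEnd frontEnd = new QueuedFrontEnd();
        frontEnd.startWorker();
        frontEnd.submit("order-1001");
        frontEnd.submit("order-1002");
        Thread.sleep(200);  // give the worker a moment before the demo exits
    }
}

The key property is that accepting the request and doing the work are decoupled; with a durable broker in place of the in-memory queue, a slow or crashed downstream no longer loses the work item.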

   Clear contracts between layers, and performance- and load-test each layer. This ensures that each path can be separately qualified, rather than one path with many, many possible routes through the code, where it is inherently easier to hit an edge case that simply hadn't turned up before. A minimal sketch of such a contract follows.
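Here is a minimal sketch of what I mean by a contract between layers, with invented names (CustomerStore, NotificationService) purely for illustration: the upper layer is written against a small interface, so the real database-backed implementation and a test stub can each be load-tested and qualified on their own.

import java.util.Optional;

// Hypothetical layer contract: the service layer sees only this interface,
// so each implementation (real DB, cache, test stub) can be qualified separately.
interface CustomerStore {
    Optional<String> findEmail(long customerId);
}

// Stub implementation used for isolated load/performance testing of the layer above.
class InMemoryCustomerStore implements CustomerStore {
    @Override
    public Optional<String> findEmail(long customerId) {
        return customerId == 42 ? Optional.of("test@example.com") : Optional.empty();
    }
}

// Service layer written only against the contract.
class NotificationService {
    private final CustomerStore store;

    NotificationService(CustomerStore store) {
        this.store = store;
    }

    String describe(long customerId) {
        return store.findEmail(customerId)
                .map(email -> "will notify " + email)
                .orElse("no email on file for " + customerId);
    }
}

public class LayerContractDemo {
    public static void main(String[] args) {
        NotificationService service = new NotificationService(new InMemoryCustomerStore());
        System.out.println(service.describe(42));   // will notify test@example.com
        System.out.println(service.describe(7));    // no email on file for 7
    }
}

Swapping InMemoryCustomerStore for the real implementation exercises exactly one layer at a time, which is what makes separate qualification possible.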


   Review edge cases and protection. Commonly this can be things like testing null-value and negative-value cases, and large-data cases. This will be one of the most painful and tedious things to put into automated testing. Clear layered architectures, using facade patterns if necessary, will make this level of isolation possible. Beware of missing a technology or code section when reverting to layer testing. Also have "try to break it" crazy data-entry sessions to see if people can randomly come up with something that breaks it. You can do this directly against a service using a tool like SoapUI if need be. A small sketch of automated edge-case checks follows.
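As a minimal sketch of the null/negative/large-data cases, here is a self-contained example against a made-up validateQuantity method; in practice these checks would live in your test framework of choice rather than a main method.

// Hypothetical edge-case checks for an input validator.
// A real project would put these in JUnit (or similar); a main method keeps the sketch self-contained.
public class EdgeCaseChecks {

    // The code under test: a made-up validator for an order quantity field.
    static int validateQuantity(String raw) {
        if (raw == null || raw.isEmpty()) {
            throw new IllegalArgumentException("quantity missing");
        }
        int value = Integer.parseInt(raw.trim());
        if (value <= 0 || value > 10_000) {
            throw new IllegalArgumentException("quantity out of range: " + value);
        }
        return value;
    }

    static void expectRejected(String raw) {
        try {
            validateQuantity(raw);
            System.out.println("MISSED edge case, accepted: " + raw);
        } catch (RuntimeException expected) {
            System.out.println("correctly rejected: " + raw);
        }
    }

    public static void main(String[] args) {
        expectRejected(null);                      // null value case
        expectRejected("");                        // empty value case
        expectRejected("-5");                      // negative value case
        expectRejected("999999999");               // absurdly large value case
        expectRejected("12345678901234567890");    // large data that overflows parsing
        System.out.println("accepted normal case: " + validateQuantity("250"));
    }
}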


  Get professionally load tested and verify the usage pattern. You need load testing which is done from multiple sites and multiple computers. Usually doing it in-house is not enough if you are serious about 24x7, 99.99% uptime. Mirroring the hardware is also key. You need to invest in fully identical mirror setups for load testing of this nature, or it might be useless.


   Have extensive logging. Point to the method and place in the log; don't just save the error. Number the log entries, with each one getting a unique code, so you can search for it very quickly; don't count on a stack trace. This is key for the 3 a.m. phone call. If you don't have enough logging to investigate an edge case that never should have happened, and to minimize the code that needs to be reviewed and checked, then you need more. A small sketch of this style of logging follows.
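Here is a minimal sketch of that logging style using java.util.logging; the codes (ERR-1041 and so on) and the PaymentProcessor example are invented for illustration. The point is that a 3 a.m. grep for a single code lands you on exactly one statement in one method, whether or not a stack trace survived.

import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: each log statement carries its own unique code (ERR-1041, ERR-1042, ...)
// so an on-call engineer can grep straight to the method and branch that produced it,
// without relying on a stack trace surviving.
public class PaymentProcessor {

    private static final Logger LOG = Logger.getLogger(PaymentProcessor.class.getName());

    public void process(String accountId, long amountCents) {
        if (accountId == null || accountId.isEmpty()) {
            LOG.severe("ERR-1041 PaymentProcessor.process: missing accountId");
            throw new IllegalArgumentException("accountId required");
        }
        if (amountCents <= 0) {
            LOG.severe("ERR-1042 PaymentProcessor.process: non-positive amount "
                    + amountCents + " for account " + accountId);
            throw new IllegalArgumentException("amount must be positive");
        }
        try {
            chargeGateway(accountId, amountCents);
            LOG.info("INF-1001 PaymentProcessor.process: charged " + amountCents
                    + " cents to " + accountId);
        } catch (RuntimeException e) {
            // Unique code plus the exception: searchable even if the trace gets truncated.
            LOG.log(Level.SEVERE, "ERR-1043 PaymentProcessor.process: gateway failure for "
                    + accountId, e);
            throw e;
        }
    }

    private void chargeGateway(String accountId, long amountCents) {
        // Stand-in for the real payment gateway call.
    }

    public static void main(String[] args) {
        new PaymentProcessor().process("acct-77", 1999);
    }
}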


Beware of anything with pointers: Sorry, but the big advantage of modern languages is not having pointers, a trade we gladly make for the tiny performance hit.

  Specify development, system and regression test, production, and production mirror environments RIGOROUSLY, and have a strict process for change or deviation. Many times a new rev of a 3rd-party tool will inject a bug or unexpected behavior, so this has to be protected against throughout the process.

  Set up a review process for every failure and make sure that there is a serious attitude with all necessary parties involved. It should include:

  •    Results of the Investigation
  •    Proposal to prevent this AND SIMILAR CLASSES OF PROBLEMS
  •    REVIEW of PRIOR defects and whether the company is being successful in preventing them


One key thing for this kind of review session is that a lackadaisical attitude of "the bug is fixed" can lead to continued problems. Why did the bug happen? What about the nature of, or approach to, the code allowed it to happen? What can we look for in code reviews? If it's a system or 3rd-party-related issue, then review whether there are options, or how you can work with the vendor to ensure not just a bug fix, but that the whole class of problem is reviewed.

I hope this helps you begin to think of all the issues with seeking 24x7, 99.99% uptime with modern technologies. I did not go into the process side because there is already much written on the Six Sigma and Gemba Kaizen methodologies. In the end, clean, well-organized code and architecture is MORE important to Six Sigma success with technology than review process and formal process definition. Stressing code elegance and software as an ART, not a commodity, is key for management, which also means you cannot treat your engineers like commodities with a basket of skills either. The art of good code is taught from good senior engineers on down, and never in school, and never in most development companies. Keep that in mind when choosing your senior team.






