Wednesday, June 26, 2013

Lady Swineburn's Roundet d'etet

that old bum had he Napoleon's head he surely woulda sung
effigies for fleecers and pre raphaelites
a long stream of sneezes
among the uptights
anactoria's embrace
although lewd made haste
as sappho caught her taste
and history was undone

a screeching parlance a mischievous dance
Poe's passing said he launched in greatness
the last of the enchanced
what meat begins with the letter G?
Ask not old man swineburn
his lady, or the tree

Wednesday, June 19, 2013

Google Goofs With Ray Kurzweil

Ray Kurzweil wrote a book on "How to Create a Mind" and somehow Sergey Brin of Google was so dumb in these areas as to think that Kurzweil had said something significant. He hadn't. All Kurzweil did was rehash 1980s research and ideas by others and add a bit of modern brain imaging. Kurzweil is no more the messenger of how the brain works than Gould was for evolution. Both were hucksters and nothing more.

 Anyone who knows the history of A.I. will recognize that the basic theory (and even the diagrams that are used to illustrate it) is very much in the spirit of a textbook model of vision that was introduced in 1980, known as neocognitron.

Now that isn't to say that Kurzweil didn't start a music revolution with his synthesizer, and his blind reading machine, but that was 1960s technology and there's a big "what have you done for me lately."

Sergey specifically hired Kurz to work on natural language processing, an area Ray knows little to nothing about.

I'm reminded of Hinton giving a TED talk about neural network recognition of individual letters. And as I watched I thought HEY, this looks familiar. Nothing had changed since the 1980s. These old FRUMPS are not the leaders. Who are? Well, people like me who are actually there in the details, pushing on the real leading edge and testing mechanisms of neural design. Edelman's younger protégé Olaf Sporns, author of Networks of the Brain.

Even more disappointing is the fact that Kurzweil never bothers to do what any scientist, especially one trained in computer science, would immediately want to do, which is to build a computer model that instantiated his theory, and then compare the predictions of the model with real human behavior. Does the P.R.T.M. predict anything about human behavior that no other theory has predicted before? Does it give novel insight into any long-standing puzzles in human nature? Kurzweil never tries to find out.

Tuesday, June 18, 2013

From Windows 7 To Linux Mint - The Lenovo Dragon Breathes Peacefully

I've been struggling to do development on my big Lenovo dual-Pentium 3.4 GHz machine using Windows 7. When pushed, this box goes into full dragon mode and huge fans scream to life as the CPUs peg 100% usage. The problem is, in Windows 7, it seems like doing almost ANYTHING pegs the CPU at 100%. All part of the NSA spying software they've installed, no doubt.

When the system would come back from sleep it would animate through all the background images I had missed, locking my system for ten minutes.

Countless times the whole thing would just freeze up. Good ol' Indian H-1B engineering, no doubt. So while Windows 7 runs great on an i7 8-core CPU, it runs like crap on a brute of a box that it should be screamingly fast on. What gives? Obviously the developers hadn't bothered to test it on older boxes.

Well, it got so bad I couldn't run a database, a browser, and Eclipse at the same time. So I gave up and installed MINT. Unlike PeppermintOS, which is ugly, MINT is a full 1 GB distribution, so you can't just slap it on a CD. So I put in a DVD and tried to burn it, BUT it turns out my refurb only has DVD reading capabilities. That sucks. OK, so I slapped it on my smart key and used Auto-Load partition to get it recognized as a proper drive. It ran. But unlike when you boot with it and are allowed to test it all out, it just asked "do you want to install it into your Windows..." and, feeling terrified I was about to blow away my boot, I paused.

Then I got angry. Hell yeah, blow away Windows. I don't care, I can't take it any more!

So I clicked. And in a few minutes MINT came up. And my windows 7 boot was nicely preserved.

And then it dawned on me. The beautiful silence. Both CPUs were running at 15%. Which was as it should be. In fact I cannot ever get the CPU to run at 100% dragon breath mode on Linux.

American engineers. God bless em.

Friday, May 17, 2013

JSON Exposed Repositories & EclipseLink vs. Linq

There is a devilishly difficult pattern question in Java architecture which is emerging as new deviceology throws us to the wolves of mobile devices and cloud beastments: the ubiquitous need for JSON-able ORMs and AJAX feeders for data. After three years pondering a hand-hacked and painful repository pattern, with hand-squaged DAOs floating over Linq for (Buggy) Entities 3.5 and Linq for (almost fixed) Entities 4.0, which can now finally expose a property used in a link rather than force you to use a table trigger, I finally came back to explore the tough patterns in the Java world, which had gotten somewhat better but not completely.

So the first devil sausage says use the beautiful new JPA plugin, which does visualization in Eclipse Juno, to blast out your db tables and your entity beans, then just slam some JSF and some pretty facets and UI wonders into the front end and blammo, beauty for not much cost and time. To put darning on the socks, EclipseLink will now also blast out all the JSON you need over REST for your AJAX calls.

But what if you chose the pure route: ditch the JSF and go straight HTML with JavaScript and a boofy subslather like Knockout or Angular. Is this heresy to the Java world? Moreover, will you lose all the pretties that you get with something like PrimeFaces and other nice beasts?

It's a tough one. More and more, JSF client workups seem dated and old. But on the other side, if you've GOT the power of JSF, why not use it? Tough, tough.

Things are changing as cloud and mobile demand robust AJAX servers and tooling to spin JSON at a mile a minute. The question is, can the developers keep up? You would think that after 100 years of this someone would set up a model-centric framework that just blasts out all the REST for you. No, not Roo. And not stuff that starts with clunky UML either.

In a way I kinda like the new graphical entity modeler in Eclipse. It's definitely the way to go. Now if they can just include the JAXB annotation properties and intelligent defaulting so you get BOTH XML, JSON, and entities all keyed out right. Still, it's miles ahead of Linq and .Net.
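
Just to make "all keyed out right" concrete, here's a minimal sketch (my own, not anything the modeler spits out today, and the class and field names are made up) of one entity carrying both the JPA and JAXB annotations, so EclipseLink MOXy can marshal the same bean as XML or JSON over REST:

    // Hypothetical example: one bean, dual-keyed for persistence and marshalling.
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    @Entity                                // JPA: maps to a table
    @XmlRootElement                        // JAXB: root element for XML/JSON output
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Customer {

        @Id
        @XmlElement
        private Long id;

        @XmlElement
        private String name;

        public Customer() {}               // JPA needs a no-arg constructor

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

Point a JAX-RS resource at something like that and EclipseLink can hand back JSON or XML depending on the Accept header, which is the kind of intelligent defaulting I'm wishing for.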

Tuesday, February 12, 2013

Books hit New Zealand. Look out world!


Did they have to translate it into New Zealandish?

New Book - Foundational Principles of Cognitive Cybernetics

I've been stumbling with this one. It's my first serious book. I want something that will change the world. It's no small task.

What is cognitive cybernetics? And what are foundational principles? It attacks what must be the basis for developing synthetic thought, real thinking intelligent machines. 

I got so mad reading Kurzweil's childish, pathetic attempt to understand all this that I just had to throw my gauntlet down in the ring.

In the meantime, "Poems for College Curmudgeons" will be coming out soon. A more light-hearted affair. But this book, this big book... sometimes I crumple into a paper ball just thinking of the hugeness and difficulty. I worry it will take a decade to complete, but somehow it needs doing now. Onward.

My first computer....

It's hard to get across to newbies how far we've come. My first computer only had six letters I like to tell people. It looked like this:


OK, that's a 6502; mine was an 8008. You'd punch away at that keyboard for hours entering pre-written machine codes. You'd have to calculate the jump offset of your branch or if statements. Can you imagine? Well, anyways, your reward for two hours of typing in low-level codes was a number that looked like your expected result, something like 2718 on your calculation of the e constant. But of course it would say it in hex, so it would say A9E. Sometimes it said A8E and you'd spend hours looking for a mistake. That's what computers were. Not video games. Not word processors. Not web browsers. A9E. Think about it.

Often when it said A8E you'd have to get out a soldering iron. Because the boards were early designs to support the processors and get them out for testing, but since so much was in design the boards would have jump wires where they needed a new connection. A lot would come loose or disconnect slightly. Then you'd get a program which said A9E sometimes and A8E sometimes. In the words of Tron, you'd "bring out the logic probe" and track it down. Sometimes these connections gave way when it was hotter or more humid. Things we never worry about today. Can you imagine software that worked, but not on rainy days? Yep, that's the way it was.

We programmed a "star wars" game for our HP41CVs, a RPN programmabe uber calculator the size of a brick.




An MIT student stole mine and I was so pissed about that.

From RSKEY.org - "A stunning calculator even by today's standards, more than 20 years after its introduction, the HP-41C remains one of the best handheld calculating devices ever conceived. The HP-41C line remained in production for over 10 years, practically defining the high-end calculator industry throughout the 1980s. Numerous machines remain in use today, no doubt due in part to versatility offered by the calculator's four expansion ports. The HP-41C has been used in the most exotic places, including the Space Shuttle or the cockpit of the Concorde."

Why was it called the CV? Well, because of course it had FIVE TIMES the memory of the 441-byte HP-41C. TWO THOUSAND TWO HUNDRED and THIRTY THREE bytes. I mean, can you imagine the luxury of having a thousand bytes to spare? I constantly tell my programmers that I don't want to hear them whine that a 2K memory leak isn't a big deal; that leak is bigger than my whole development platform on the HP-41!


Here is what our first video game looked like (each line of characters was displayed one at a time, one line per second; the sequence below would take six seconds):
>                              <
  >                         <
     >                  <
         >     <
           > * <

Now what this was meant to simulate was Luke firing his torpedo into the Death Star. You'd guess the left-right position of the tube and enter a number. If you got it right you'd see:
XXXXXXXXXXX

That was a video game from the early eighties, geek style. In order to see this game you'd need to load your code up with a strip reader, ten strips about four inches wide and half an inch tall. Having the extra magnetic card reader meant you were cool at a level that granted you immediate VAX access. We were geeks, but the word hadn't been invented yet.

A rebuilt HP-41CX with a card reader costs over 500 bucks today. That's how much people loved them. Can you imagine writing your senior thesis on a calculator with one line of text at a time?

"The HP-41CX was the same as the HP-41CV but added the Time module (stop watch plus clock with alarms), an Extended Functions / Extended Memory module, a text editor, and some additional functions."



Later the HP 71B came out. While the 41 only had 4K RAM cartridges, whopping, endlessly huge 32K RAM cartridges could be had for the 71. At the time it seemed like madness: who could possibly write a program that required 32,000 characters?

Later the ill-fated HP 42 would come out with TWO lines of text. TWO! Such luxury after starting with just four letters a few years prior. This was amazing stuff. Where would it all end, we wondered, where would it all end. My best friend Fish told me that soon there would be portable calculators with eight lines of text, but I told him that wasn't likely. I mean, really, four lines were more than enough for anyone.

The Problem With Dell

I worked at Dell helping them launch a new factory. There are several problems with Dell. Things have come to a head with moves to take the company private; one wonders if it's to hide falling numbers and revenues.

Dell wants to be the small business one stop. It wants to make the IBM model work. But I'm not so sure IBM makes the IBM model work. And that's on the high end fat profit side.

Dell is moving aggressively into the cloud, yet its offerings aren't so memorable.

Dell is suffering from a lack of innovation in a huge way. I felt so bad for them that I wrote Michael, whom I had met during my tenure there, and requested a coffee meeting to discuss some ideas for the company. He declined. A really great new computer and a new story that differentiates them would make a big splash.

Instead their old VP from India gutted most of their development team and is now busy doing the same for General Motors after gutting AMD. At Dell Hyderabad, huge eight-foot-tall pictures of Michael Dell tell the workers sayings about being productive and Dell-like.

In the end innovation comes from people in free, happy spaces, not death-by-cube land. For a while they can buy companies, but like transplanted trees, they will refuse to flower in the new soil. It's time for Michael to wake up and get some real creative juices flowing, because if not, there's really no way to compete with the juggernaut coming from China and Asia. The margins on the PC business have gotten so small for Dell because there is no value add. Moving into new markets may not be the answer, because you can't have a staid company innovate and compete at the level that marks the best and brightest of Silicon Valley.

Tuesday, February 5, 2013

a great dennis ritchie song...






Are They Executives?

OK, I've been a Vice President, a CEO, a CTO, and similar ilk for countless companies for over a decade. I could refer to myself as "A Software Executive" but, other than on my resume, I don't. My current title on LinkedIn is "freefloating curmudgeon". But more and more I see people who work in sales calling themselves executives. I think one of the bachelors on that TV show did this.

I'm sorry but "Ad Executive" does not mean "Executive" that's a misreading of the term which is more along the case of to execute not C-Level person in a company but they misuse it all the same.

Or time and time again I see people who were directors call themselves "executives" and I'm like, HUH? No, sorry, a director traditionally is NOT an executive.

Then there's the senior consultant at a big company who describes their title as "Senior Consultant (VP)". Um, no! NO! You aren't a VP, because if you WERE a VP your title would be "Vice President of Senior Consulting".

This weird title inflation has to stop. What also has to stop is schmucks at Amazon belittling people who ARE and WERE executives as somehow being nowhere people for a director role. Give me a break. People, get YOUR TITLES RIGHT! Someone who has been a VP and CEO and CTO for ten companies... THAT PERSON IS AN EXECUTIVE AND IS MORE THAN QUALIFIED FOR A DIRECTOR ROLE! GOT IT!

If this keeps up I'll just become an Ad Executive and sell cheesy poofs all day and act all high and mighty. Hey, at least I'll still be an executive. And make more money it seems in our broken collapsed society.

The Cloud Integration Stumbling Block

One problem is that while cloud infrastructures do great things to automate ONE application in the cloud, or, in the Amazon model, provide a bunch of services in the cloud to support you, they don't really provide cross-service functionality. A person can't register a cool application, use MySQL, a workflow engine, and a messaging service, and then try to use a third-party service which builds on that data. So you get a lot of siloed apps in the cloud when the whole point is flexibility. And single-point vendors are going to quickly realize that the cloud is more like the app store than a business offering; it is a marketplace and will grow organically.

But getting all the hooks in place is difficult, especially where billing is involved. In most cases read-only access is enough, but how would a service provider handle someone who is pummeling their service - bill the original buyer? These kinds of contractual things and costs keep the cloud stuck in silo land. Moving large chunks of data around isn't practical. Simpler non-data services like workflows or notification services fit in easily with today's model, but advanced services that require more of the Facebook model - access to data and the ability to write to it - are a ways off.

I was looking at some of the local Austin companies which got funded, and one was Digby. Digby does exactly what I've railed AGAINST over and over by providing only half a solution - a solution users don't want. They've got such an impressive sales force that they've gotten lots of big-name companies to buy their service - locationpoint - which effectively broadcasts messages or coupons to users as they pass a store. But we've seen this before in several other companies. The problem is, what's the incentive for the USER to install your software? Just to see this? I don't think so; it's far from enough. All of these types of companies that try to sell dry bread to users with no filling will go the same route. They'll get some sales from big companies who are clueless and want some kind of mobile strategy, but users will never adopt it. And yet many go on with such huge and impressive teams that they get funding and seem like legitimate companies. I think this market of crusty bread apps will wither and die. People need strong content before they will be incentivized to download something that isn't a game and will only blast them with advertising.

Foursquare suffers from exactly this, which is why they are changing from their boring check-in model. Check-ins are great at airports, boring in life. It's one more sandwich that skimps on the meat for its users. I mean, sorry, I'm too busy with my life to "check in" just to be hip and social. And no, getting a dopey "mayor of the pizza joint" competition doesn't really float my boat enough to bother with it either.

Then you have companies like Dish.fm which, when no users would eat their stale sandwich, decided to just SCAN YELP for all the data and reviews. Well, at least they are providing some information to users, but it's hardly very accurate or relevant in a way that's novel. Why not just read Yelp?

Finally I began to read up on Austin's own Capital Factory and I have to say, it seems to me, and I mean this in the best way, that a lot of dingbat companies with ridiculous business plans go through the place. They get to compete and finally earn 20,000 bucks in investment and a place in the incubator, but really it's like a bottom-feeding VC business model. They get lucky 1 out of 20 but chew up a lot of companies along the way. Really, it's hard to say they do any worse than regular VC, but the vetting for people who can actually SELL their product should be tougher. Maybe they need to focus on that instead of how to live on ramen noodles. Austin is a great city for startups, if only because it's so much cheaper. I can't imagine doing a startup in San Francisco with the two-thousand-dollar-a-month studio apartment. These places USED to be the startup meccas because they were cheap and people had garages. The VCs moved there and refuse to leave or branch out. But the startup companies can no longer afford to be there.

So the second kind of company takes it further and builds a mobile advertising platform which all the other mobile developers are just supposed to use. But do they have a profit-sharing model and enough subscribers to make it worthwhile for the mobile developer to go to the hassle and add it? Heck no! Another stale-bread-only sandwich for the users. Yet again, these kinds of companies are getting funding left and right. Come on VC, always ask the basic question - for whom does the bell toll. If users aren't going to enjoy eating it, there's just no way it will be successful.

There's a whole host of great startups that have great ideas that require an ECOSYSTEM of users and rewards for them. It's so tough to build and so risky. In today's pulled-back investment environment the VCs aren't going for them very much either, but that's the kind of company that would work, not the stale-bread ones. Instead everyone is forced into bootstrapping and chicken McNugget SaaS designs for their startups because that's what VCs are looking for. Empty, full of silicone, and in the end it might kill you, but it seems tasty for the first few bites.

Sunday, January 27, 2013

Apple Falling Apart Without Jobs

I wrote this a while back on Steve Jobs, and now we see how Apple has lost its direction in the same way they did during the Pepsi Cola years. In my own entrepreneurship I long to find the Steve Jobs to balance my inner Wozniak. The business and the sales side of the game has never been my strong side. I respect those who do it well. Jobs may not have been a technical visionary, but he was very detail and design oriented. The Next computer just looked hot. It was the first PC that did. Well, it cost an arm and a leg so it wasn't really approachable, but so does a Ferrari.

Why did Google enlist Sebastian Thrun? Ray Kurzweil? They are trying to buy an ounce of Steve Jobs magic. Vision is valuable. Google got stuck with a car that drives itself, but is well stocked in vision. And that's priceless. And with Sergey (we were classmates at Stanford in the PhD CS program at the same time; he got Google, I got worked to death and made a lot of promises. The joys of being a woman) at the helm, things are starting to make more sense rather than "gee, we're playin' with billions 'cause we can"

Dokko.co just patented three things related to Google Glass-style internet-enhanced reality. It's the new big thing. It's coming. Sooner than you think. But my feeling is... if you can't think up a .com name for your company, you suck.

Hey Google, I read Ray Kurzweil's book on neural networks and genetic programming - all 20-year-old stuff (great synthesizer design notwithstanding; I own four of his keyboards and a maxed-out K2000RS). Feel free to hire me to lead your augmented reality neural-cybernetics division. Trust me, it will be way cool.

------

In the year of Steve’s passing, I want to talk about the conversion of the nerd. I was there. I programmed both the Apple II and the CompuColor. I decked them side by side in their battle. I know what happened.
Steve Jobs was not an intellectual or a geek or a nerd. Steve was a hippie turned messiah. He was technical and knew about circuits and programming, but he was never a master of them. He loved circuits and the power they provided and the digital revolution. Jobs and Woz met in high school, and later, when Steve Wozniak would visit home from Michigan, Jobs was always there to learn. “We both loved electronics and the way we used to hook up digital chips,” Wozniak said. “Very few people, especially back then, had any idea what chips were, how they worked and what they could do. I had designed many computers so I was way ahead of him in electronics and computer design, but we still had common interests. We both had pretty much sort of an independent attitude about things in the world. …”
After high school, Jobs enrolled at Reed College in Portland, Oregon. Lacking direction, he dropped out of college after six months. In 1974, Jobs took a position as a video game designer with Atari. Several months later he left Atari to find spiritual enlightenment in India, traveling the continent and experimenting with psychedelic drugs. In 1976, when Jobs was just 21, he and Wozniak started Apple Computer. The duo began in the Jobs family garage, and funded their entrepreneurial venture after Jobs sold his Volkswagen bus and Wozniak sold his beloved scientific calculator. They had $1,350, enough for parts and ramen noodles and not much else (were there ramen noodles back then?).
Their first computer – the Apple I – was a colossal failure. It barely sold. They had poured their lifeblood, their sweat, and all their time and finances into it. And it failed. It was ignored. It was nothing. Compare this to Larry Ellison, who took IBM's SQL language, built a database around it, and it sold sold SOLD SOLD. Endless steady growth. But for Jobs it was different, because it started with failure.
The single most important question in the history of Silicon Valley is what happened between Apple I and Apple II. And more importantly, why did they keep going? Well, when you work in a garage and your food is ramen noodles and there are no salaries, the few sales they did manage were able to fund continued development. And with it they added the one key ingredient that was missing – color. The Apple II was the world's second color computer for hobbyists. But at the same time, there was the CompuColor. And the CompuColor came first. And it was better. In fact, if you messed with it with bad code it could literally fry the circuits of the custom Intel monitor that struggled to keep up with its high-resolution graphics. The square blocks of the Apple II were a dinosaur by comparison. So how did the lesser machine win the fight?
I literally sat in a room with an Apple II and a CompuColor II duking it out for nerd mind share in the local high school. The CompuColor was the better machine. It was the one nerds fought for time on while the Apple II sat unused. It had the coolest keyboard this side of Mythros and probably EVER in computer legend. It had better games and much better graphics. It was more interesting to program. The Apple II had more games, most of which were programmed in BASIC. And the code was hideous. Direct PEEK and POKE statements into video RAM, one after another by the hundreds. If you wanted the bouncing ball to change from green to flashing purple you updated the hex code. The older students hated the dweebier and dumber Apple II users, so we would always hack the games and switch out sections of PEEK and POKE code. Suddenly Mario had only one leg to hop on, and his dinosaur now looked like a frisbee. The non-programmer would come in to play a game and go “What the HELL!” We would feign oblivion and hack away over and over on the real computer – the CompuColor – the joy of 11th-grade hackerdom. A real machine for real engineers. After all this one might ask not why did Jobs succeed, but why did the CompuColor, with all that going for it, lose? As I’m about to show you, an important first lesson of success is that to win, you simply have to show up and be in the game. The second lesson is to do things right.
The King of Color – the CompuColor II

The young upstart, the Apple II


"Apparently, you could not format the 5.25″ disks yourself, surely because Intecolor wanted to make money by selling these preformated disks… But many users ended up by writing their own formating programs.
The system was very vulnerable to certain hardware tinkering. Tampering with the addresses that accessed the hardware registers could wipe out all the RAM (it did something fatal to the refresh logic). It used an Intel CRT controller for screen processing. Altering the number of scanlines to too high a value could kill the CRT.
The ROM contained a ripped-off version of Microsoft BASIC and a simplistic file system. Microsoft found out about them, and forced ISC to become a Microsoft distributor. They also collected royalties on all machines sold up to that time.
The disk drive was originally designed to use an 8-track tape cartridge for storage (yes, you read that right!). When that proved to unreliable, they switched to a 5.25 inch disk drive. They didn’t change the file system, which still thought it was a tape drive. When you deleted a file, it re-packed all remaining files back to the front of the disk. Used the 8K of screen RAM for a buffer to do it, which led to some psychedelic I/O. ” – OldComputers.com
In the end it was the government cracking down on them for FCC radio controls, and Microsoft cracking down on them for royalties, that slew CompuColor. They had gotten better faster by taking shortcuts. But they were screwing the developers by charging them for disks. Without disks in hand, developers didn't feel like writing software for it. Once a monitor fried out, the high schools had no funds to replace them and the CompuColor went to the trash, replaced by the more reliable Apple IIe. Jobs and Wozniak had done it all themselves out of love. And that drive and care about the design made sure it was done right, without shortcuts, even if they got less far. Were there problems? Of course. But five years later the CompuColor would be gone, and the Macintosh would be released, catapulting Apple to the Fortune 500. And the greedy MBAs would think they no longer needed Jobs and cast him out. While the fuel of his passion had sustained Apple, within five years it would falter. Jobs would go on to invent much of the core of object-oriented programming and operating systems while at Next, when dopey computer science professors were stuck in their academic world of Pascal. From Next he birthed Pixar. But the real birth was the widespread use of Objective-C, object-oriented programming, C++, and finally Java and C#.
Objective-C was created by Brad Cox and Tom Love in the early 1980s. Cox was intrigued by problems of true re-usability in software design and programming. One of the first object-oriented languages was Smalltalk, but it wasn't backwards compatible with all the C programming out there; it was really its own world. Cox wrote a pre-processor for C to add some of the capabilities of Smalltalk, which he called “OOPC” for Object-Oriented Pre-Compiler. Cox showed that making interchangeable software components really needed only a few practical changes to existing tools – they needed to support objects in a flexible manner and allow for the code to be bundled into a single cross-platform format. Jobs picked up on this and selected Objective-C from StepStone, but then he did it one better. He extended the GCC compiler to support Objective-C and developed the AppKit and Foundation Kit libraries to make development faster and more powerful.
The Next Computer – What Steve Jobs Thought Computers Should Look Like

The Sun SPARCstation 1 (4/60), affectionately referred to by programmers as “the pizza box”

In an odd way, the real revolution that Jobs inspired would be unknown to the Apple people, because it wasn't the hardware but the programming language of the computer that Jobs had the biggest impact on. While Dr. Stroustrup would have given us C++ anyway, it was the Next Computer and its brilliant hardware – that optical disk that could fit all of Shakespeare! – that provided enough geek gadget gleam to get all the nerds excited about moving into object-oriented programming. Twice he got the nerds to shift their seats and turn to the machine he designed. The first time it was by surviving long enough to watch the top dog fail, but the second time he did it by offering us sexy. It cost $6,500 when released in 1990, about $20,000 in today's money! Far beyond what nerds could afford.
Apple had released the LISA computer in 1983 (Steve Jobs was kicked off the project in 1982, so he is only partially to blame) for a staggering $10,000 entry price. It didn't sell. I recall walking into a computer store, which were then everywhere, and drooling over the massive 2 megabytes of RAM. How could anyone ever write a program that could use so much memory? The LISA was a dream computer out of the reach of everyone. Yet in just a few years here was Jobs again, releasing a computer for nearly the same price. What could he possibly have been thinking? It was a price which meant only universities could afford them. What ended up happening is that every computer science department in the country wanted one, and all the students walked by and drooled. What on earth was this thing? At the time, when Sun was dishing out the pizza box – stale crust to be sure – and DEC had us stringing input parameters, Jobs had us in revolution. Eventually the Next physical platform would fade and the software would continue on.
The LISA, a failed first go…

Today all core Apple software and the programming for the iPhone is in Objective-C, which goes back to Next. What then, I ask, was Jobs' real legacy? It was never about the hardware.
Finally, a young man at CERN named Tim Berners-Lee sat down at this computer in 1991 to create the first web browser and web server and with it… the internet you are reading this article on today. Do you recognize it?

The Jobs story is not one of being the smartest or making the best product. It was all about perseverance and love of what they were doing. And that translated into doing things right. And caring about the product. By the time the Apple IIe was out, it was a solid, stable platform and the developers jumped on. Once the developers were out there, he switched roles. He no longer soldered and built the machines. Instead he inspired the next generation to see them with the same awe and glory that he did. He was the messiah of the digital age.


dennis ritchie

I wrote my first C program in 1986. In 1988 I was an instructor in C. I taught advanced computer graphics and helped the artificial intelligence lab (the Winifred Asprey Lab) AI students get their hands around LISP. But mostly I helped people with C. We had been BLUDGEONED by arrogant professors that PASCAL was the way. I hated PASCAL. I hated PASCAL so much I pretended I liked LISP. Then I learned PROLOG and was in desperate need of salvation when I learned C, and suddenly, for the first time, computers worked. I had to write some C++ code at my last job back in 2011, so that's 25 years of C programming I've done in one way or another. A quarter century of work, all begun by a man largely forgotten and ignored by the industry because he passed at the same time that Jobs did. But he did so much more. Our lives are a million times more changed by Dennis Ritchie than by Jobs; you just don't know it. His work isn't sexy, you can't put his work in your pocket, and you can't explain it to people who watch the Kardashians, but his work changed our world.

It's really hard to explain the world of the old days of computing to those who weren't there to see it; today it's as if we all had computers growing out of our butts. We sat at machines where, when you typed into the computer, it typed onto paper. When you finished you had that long roll of paper that showed everything you did. Why did we type on paper? Well, because that's what typewriters did. Later they had TV screen things that had no graphics at all, just green on black. There was this heavy thing called a Tektronix display that cost like twenty grand and it did graphics in bright green. Few of us knew how to use it.

In the end, all those cell phones: all their communication happens in C, and it happens on operating systems written in C, and they talk to your printer through a driver written in C, and that is powered by a power station that runs control software written in C. And when we compile Java we get bytecode, which is then interpreted by a program… written in C.

Thank you, Dennis. You will be missed.

Being a technology dork

I was a real dork telling a CTO that I had been teaching C before they were born. I didn't really mean it that way; I was just reaching for the idea that, yeah, I've done this for a while. But one advantage of being an older engineer is that you've seen such an evolution. The advantage of being younger is that things are a lot more stable than they used to be. I don't think people who are new to coding really get that, no, we aren't so excited to learn the 400th new way to write UIs or the all-new language that's coming our way. But we'll grumble and learn and keep learning. Separate the wheat from the chaff.

I drew this little image map of the languages I've worked on... it shows how things spread. It's been a long journey.


Sunday, January 13, 2013

Six Sigma and 24x7 Reliability in Large Scale Web Design


 
   by Gianna Giavelli

Many times practitioners of Six Sigma or Gemba Kaizen will push on process and review as the way to get to near-perfect system uptime. But defect prevention is a complicated thing, and when it involves systems engineering, the process of hitting a defect and then performing a Six Sigma resolution process may be too slow. By too slow I mean it might iterate while slightly different errors keep occurring, resulting in terrible uptime scores before all the issues are ever figured out, if ever.

So what to do? What is an appropriate technology strategy when the boss says we have 99.999% uptime contracts? The answer isn't going to please people. The real solution is a Six Sigma defect policy, but only alongside a technology review policy.

Forming a technology policy:
 One of the secrets to uptime is having clear and optimized code with appropriately selected technologies, and knowing those technologies' limits. After I led the re-architecting at Blueturn, the analytics platform went from complex, slow, and (rarely) crashing to stable and performing. You need someone who can really think through edge and black swan conditions, anticipate as much as possible, and defend against them.

Some Examples:
   Perform a database audit to see how it will survive a hypergrowth situation. The solution would be to consider a distributed multi-node technology like Cassandra, or disk-level solutions like striping plus separation of index and data, and confirming drive space. I once got hit when the number of users went from 400 to over a million very quickly. And if it isn't the db layer itself, it might be the caching technology or app server that can't keep up. Test this case specifically!
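
   To make that concrete, here's the kind of throwaway harness I mean, only a sketch with a made-up JDBC URL, table, and query: grow the table an order of magnitude at a time and watch what a representative query does. In real life you'd point it at a production-sized mirror, not your laptop.

    // Hypothetical hypergrowth audit: seed rows in bulk, time a representative query.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class HypergrowthAudit {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/audit", "user", "pass")) {
                long batch = 1_000;
                for (int step = 0; step < 4; step++) {        // roughly 1K rows up toward 1M
                    seedUsers(con, batch);
                    long t0 = System.nanoTime();
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT COUNT(*) FROM users WHERE last_login > NOW() - INTERVAL 1 DAY");
                         ResultSet rs = ps.executeQuery()) {
                        rs.next();
                    }
                    System.out.printf("after +%,d rows, query took %d ms%n",
                            batch, (System.nanoTime() - t0) / 1_000_000);
                    batch *= 10;
                }
            }
        }

        // Bulk-insert the requested number of synthetic users.
        static void seedUsers(Connection con, long count) throws Exception {
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO users(name, last_login) VALUES (?, NOW())")) {
                for (long i = 0; i < count; i++) {
                    ps.setString(1, "user" + i);
                    ps.addBatch();
                    if (i % 1_000 == 999) ps.executeBatch();
                }
                ps.executeBatch();
            }
        }
    }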

  Front web services with message queues. Web services are thread-bound for performance; message queues are CPU-bound but with greater recoverability. Even for a process which is a synchronous request, using a call to a message queue ensures that the work item is not lost and will eventually be processed.
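
  A minimal sketch of what that front looks like in plain JMS (the queue name, payload, and class are hypothetical; any provider will do): the HTTP-facing call does nothing but durably enqueue the work item and return.

    // Hypothetical front door: persist the work item to a queue, let workers drain it.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class OrderFrontEnd {

        private final ConnectionFactory factory;
        private final Queue workQueue;   // e.g. looked up via JNDI as "jms/orderWork"

        public OrderFrontEnd(ConnectionFactory factory, Queue workQueue) {
            this.factory = factory;
            this.workQueue = workQueue;
        }

        // Called by the web service endpoint; returns once the item is safely queued.
        public void submitOrder(String orderJson) throws Exception {
            Connection con = factory.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(workQueue);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);  // survive a broker restart
                producer.send(session.createTextMessage(orderJson));
            } finally {
                con.close();
            }
        }
    }

  Even if the consumers are down, the request isn't lost; it sits on the queue until something drains it.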

   Clear contracts between layers, and performance and load test each layer. This ensures that each path can be separately qualified, rather than one path with many, many possible routes through the code, which makes it inherently easier to hit an edge case that simply hadn't turned up before.


   Review edge cases and protection. Commonly this can be things like testing null-value and negative-value cases, and large-data cases. This will be one of the most painful and tedious things to put into automated testing. Clear layered architectures, using facade patterns if necessary, will make this level of isolation possible. Beware of missing a technology or code section when reverting to layer testing. Also have "try to break it" crazy data entry sessions to see if people can randomly come up with something that breaks it. You can do this directly against a service using a tool like SoapUI if need be.
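
   Here's roughly what those cases look like as plain JUnit tests against one layer; PricingService and its rules are made-up stand-ins for whatever sits behind your facade.

    // Hypothetical service under test: reject bad input loudly instead of limping along.
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.math.BigDecimal;
    import org.junit.Test;

    public class PricingServiceEdgeCaseTest {

        static class PricingService {
            BigDecimal quote(String customer, int quantity) {
                if (customer == null || quantity < 0) {
                    throw new IllegalArgumentException("bad input");
                }
                return BigDecimal.valueOf(quantity).multiply(new BigDecimal("9.99"));
            }
        }

        private final PricingService service = new PricingService();

        @Test(expected = IllegalArgumentException.class)
        public void nullCustomerIsRejectedNotSwallowed() {
            service.quote(null, 10);
        }

        @Test(expected = IllegalArgumentException.class)
        public void negativeQuantityIsRejected() {
            service.quote("ACME", -1);
        }

        @Test
        public void zeroQuantityQuotesToZero() {
            assertEquals(0, BigDecimal.ZERO.compareTo(service.quote("ACME", 0)));
        }

        @Test
        public void absurdlyLargeOrderDoesNotOverflow() {
            BigDecimal total = service.quote("ACME", Integer.MAX_VALUE);
            assertTrue(total.signum() >= 0);   // no silent wrap-around to a negative price
        }
    }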


  Get professionally load tested and verify the usage pattern: You need load testing which is done from multiple sites and multiple computers. Usually doing it in-house is not enough if you are serious about 24x7 99.99% uptime. Mirroring the hardware is also key. You need to invest in fully identical mirror setups for load testing of this nature or it might be useless.


   Have extensive logging. And point to the method in the log entry; don't just save the error. Number the log entries, with each one getting a unique code, so you can search for it very quickly; don't count on a stack trace. This is key for the 3 a.m. phone call. If you don't have enough logging to investigate an edge case that never should have happened, and to minimize the code that needs to be reviewed and checked, then you need more.
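
   Something like this is all I mean (the codes and the method are invented for the example): every log site gets its own grep-able code, so the 3 a.m. call starts with "search for BIL-2041" instead of hoping a stack trace survived.

    // Hypothetical job with uniquely numbered log entries (java.util.logging, no extra deps).
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class BillingJob {

        private static final Logger LOG = Logger.getLogger(BillingJob.class.getName());

        public void settleInvoice(String invoiceId) {
            LOG.info("BIL-1007 settleInvoice starting, invoice=" + invoiceId);
            try {
                // ... the actual settlement work would go here ...
            } catch (RuntimeException e) {
                // Unique code plus the method and key identifiers, not just the exception.
                LOG.log(Level.SEVERE, "BIL-2041 settleInvoice failed, invoice=" + invoiceId, e);
                throw e;
            }
        }
    }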


Beware of anything with pointers: Sorry, but the big advantage of modern languages is not having pointers, which we gladly give up in exchange for the tiny performance hit.

  Specify development, system and regression test, production, and production mirror environments RIGOROUSLY, and have a strict process for change or deviation. Many times a new rev of a 3rd-party tool will inject a bug or unexpected behavior, so this has to be guarded against throughout the process.

  Set up a review process for every failure and make sure that there is a serious attitude with all necessary parties involved. It should include:

  •    Results of the Investigation
  •    Proposal to prevent this AND SIMILAR CLASSES OF PROBLEMS
  •    REVIEW of PRIOR defects and whether the company is being successful in preventing them


One key thing for this kind of review session is that a lackadaisical "the bug is fixed" attitude can lead to continued problems. Why did the bug happen? What about the nature or approach of the code allowed it to happen? What can we look for in code reviews? If it's a system or 3rd-party-related issue, then review whether there are options, or how you can work with the vendor to ensure not just a bug fix but that the whole class of problem is addressed.

I hope this helps you begin to think of all the issues with seeking 24x7 99.99% uptime with modern technologies. I did not go into the process side because there is already much written on the Six Sigma and Gemba Kaizen methodologies. In the end, clean, well-organized code and architecture is MORE important to Six Sigma success with technology than review process and formal process definition. Stressing code elegance and software as an ART, not a commodity, is key for management, which also means you cannot treat your engineers like commodities with a basket of skills either. The art of good code is taught from good senior engineers on down, and never in school and never in most development companies. Keep that in mind when choosing your senior team.






