Friday, November 18, 2016

FAILURE ANALYSIS


License To Ill
Beastie Boys album cover (1986)

    "I wish to point out that the very physical development of the insect conditions it to be an essentially stupid and unlearning individual, cast in a mold which cannot be modified to any great extent. I also wish to show how these physiological conditions make it into a cheap mass-produced article, of no more individual value than a paper pie plate to be thrown away after it is once used. On the other hand, I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity."
      — "The Human Use of Human Beings: Cybernetics and Society" (1950) by Norbert Wiener

I have some remarks about failure, and how we relate to and learn from it. Some are abstract, philosophical, general observations; others are concrete, personal, specific life lessons; that's how I've sorted them out.

ABSTRACT, PHILOSOPHICAL, GENERAL OBSERVATIONS

Norbert Wiener, quoted above, was a mathematical genius who coined the word cybernetics for the science of control and communication. He explained that a cybernetic system used measurements of the amount of error to determine how to take corrective action, making error a vital part of the process.


diagram of centrifugal "flyball" governor
( wiki.eanswers.com/en/Centrifugal_governor )

A classic example is the centrifugal "flyball" governor on early steam engines. The twin balls on the spinning shaft are flung farther from the axis as the steam engine speeds up, and the difference (or error) between the desired and actual speed triggers a lever to turn the throttle up or down. The measured error is vital to the mechanism's operation.

Or, as English poet William Blake said,

    "To be an error and to be cast out is a part of God's design."

      — "Complete Poetry and Prose of William Blake" (1850)

The School of Hard Knocks

What Norbert Wiener was arguing in the quote above, from his book written for the general reader, is that humans should be saved for tasks which require a certain amount of trial-and-error learning, since we are so good at that, and give the tasks requiring rote learning and canned responses to the machines to do for us. In other words, as automation increases it will be our job to make mistakes and learn from them, and what we have learned will be passed on to machines that rarely make mistakes or learn from them.

Another 20th century technologist who talked about the importance of mistakes was R. Buckminster "Bucky" Fuller. In his 1968 book for the general reader, "Operating Manual For Spaceship Earth," Fuller teases with the title, and then reveals that there is no operating manual, and he certainly didn't write one.

    "Now there is one outstandingly important fact regarding Spaceship Earth, and that is that no instruction book came with it. I think it's very significant that there is no instruction book for successfully operating our ship. In view of the infinite attention to all other details displayed by our ship, it must be taken as deliberate and purposeful that an instruction book was omitted. Lack of instruction has forced us to find that there are two kinds of berries - red berries that will kill us and red berries that will nourish us. And we had to find out ways of telling which - was - which red berry before we ate it or otherwise we would die. So we were forced, because of a lack of an instruction book, to use our intellect, which is our supreme faculty, to devise scientific experimental procedures and to interpret effectively the significance of the experimental findings."

Fuller's geodesic domes have proved to be remarkably strong and stable structures, and he discovered their secrets largely by trial and error, with little formal engineering education.

Many other speakers and authors have praised trial and error learning. Self-help guru and life coach Tony Robbins has told the tale of how he decided he wanted to get good at delivering seminars, and so he scheduled as many per day as he could, figuring that the more he did it the better he'd get. In "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It" (2004) by Michael E. Gerber, the author tells a similar tale of wanting to get good at management consulting, and so taking as many jobs as he could get (at a reduced rate) until he gained proficiency. I imagine in each case that the early customers didn't get such a good deal, but ultimately they both became excellent at what they do.

The Second System Effect

It is not just the "trial" which is essential, but the "error" as well. There is ample evidence that we sometimes learn the wrong lessons from too much success.

One of the first books ever written about software project management, "The Mythical Man-Month: Essays on Software Engineering" (1975) by Frederick P. Brooks Jr., introduces the concept of the "second system effect," described this way by Wikipedia:

    ...the tendency of small, elegant, and successful systems to be plagued with feature creep due to inflated expectations.

    The phrase was first used by Fred Brooks... [to describe] the jump from a set of simple operating systems on the IBM 700/7000 series to OS/360 on the 360 series.

Software developers and their clients have been fighting this trend ever since. The problem is more than "feature creep," it is the complacency, arrogance, and even — dare I say it — hubris of engineers and their customers and managers after a project that is perhaps too successful, paving the way for overreach and downfall. It is described in "The Soul of a New Machine" (1981) by Tracy Kidder, in which Data General, a Massachusetts-based minicomputer company, was launching an engineering project for a new flagship computer. CEO Edson de Castro assembled his "A" team and sent them to a new R&D facility in North Carolina, an eleven hour drive away. Apparently he expected them to fail, or at least get way behind schedule on a bloated, incompatible system (same thing), because he assembled a secret "B" team in a makeshift facility near the Massachusetts headquarters and had them build what he wanted all along, a 32-bit computer compatible with their 16-bit line. It was a job few engineers wanted, but this team didn't suffer from "second system syndrome" and delivered on time with what became the next flagship product. (Coincidentally I worked at DG while all this was happening, but knew nothing about it until I read Kidder's book.)

Another famous failed "second system" was the "Pink" project, A.K.A. "Taligent," a joint project of Apple and IBM intended to create a "Microsoft-killer" operating system for new PCs based on the PowerPC chip developed by the Apple/IBM/Motorola alliance. It did not end well.

Other examples abound. I personally witnessed the rise and fall of a "second system" known as the Stellar GS-1000, over-designed by Prime and Apollo veterans, and ultimately setting the second place record at the time for the most Venture Capital money lost by a startup ($200 million).

The story has been repeated again and again, with products ranging from the BlackBerry phone by RIM to Windows Vista from Microsoft.

The moral is that "experience" needs to include a mix of successes and failures to be valuable.

The Shame of Failure, the Failure of Shame


cartoon found on a cubicle wall at NASA's Kennedy Space Center
a few months after the space shuttle Challenger explosion

    "Victory has 100 fathers and defeat is an orphan."

      — Count Galeazzo Ciano (1942)

We have a big problem in our civilization, especially where the American "western loner" mythos abounds: as a side-effect of glorifying winning and winners, we heap shame upon losing and losers. As a result people often rewrite their histories to edit out failures. It has long been known among mathematicians that most breakthroughs are made by a mathematician intuitively realizing a theorem must be true and then working backwards to the premises that will provide a proof. But then the paper they write goes in the opposite direction, from premises to theorem, obscuring the discovery process.

In my business travels I happened upon an engineering forensics company in Silicon Valley called Failure Analysis Associates. They analyzed crashes and other accidents to learn from them. (Ironically, their major founder, Stanford professor Alan Tetelman, was killed on September 25, 1978, at the age of forty-four in the PSA Flight 182 air crash over San Diego.)

I was impressed by the quality of their work, but I guessed that their name (which provides the name of this blog entry) would come to be a hindrance. I expected it would be harder to get people to write checks to a "failure" company. Sure enough, they later changed their name to the innocuous "Exponent."

(I realize I'm taking a risk by putting the word failure in this blog entry's title, but I'm also encouraging folks like you to be courageous in your own failure analysis.)

I for one have learned not to trumpet failures, since they spook people so much, but I treasure the lessons from them. As business educator Marshall Thurber once said, "You're either on the winning team or the learning team." It's important to move beyond whatever shame comes with a failure, and to embrace the learning experience, even if the rest of the world doesn't see it that way.

Who Can Draw the Learning Curve?


learning curve illustration from Wikimedia

Google defines "learning curve" as

    the rate of a person's progress in gaining experience or new skills.

    "the latest software packages have a steep learning curve"

When I hear people use the term "steep learning curve" they are typically describing something hard to learn, like climbing a steep hill is hard. But looking at the above graph, the blue curve is the steep one, in which proficiency increases faster, so learning is easier. I have concluded that most people haven't given the concept much thought, and so have no idea what they are saying when they use the term.

I feel fortunate to have studied with a professor in college, Dr. Gregory Bateson, who gave the concept a whole lot of thought. He definitely practiced the old alleged Einstein aphorism:

    "Everything should be made as simple as possible, but no simpler."

Though Cambridge-educated in Queen's English, Latin and Greek, he had a focused way of thinking and speaking that managed to bring clarity even to Santa Cruz, California college students. He helped me understand that the so-called "learning curve" was really a relationship between curves, like the red and blue curves above. We would expect the blue curve to happen after the red curve if it was the same learner, since people get better at learning. Bateson called this Deutero-Learning, or Learning II, learning to learn. He first mentioned it in 1942, in a short paper called "Social Planning and the Concept of Deutero-Learning," reprinted in "Steps To an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and Epistemology" (1972).

    "It is a commonplace that the experimental subject — whether animal or man, becomes a better subject after repeated experiments. He not only learns to salivate at the appropriate moments, or to recite the appropriate nonsense syllables; he also, in some way, learns to learn. He not only solves the problems set him by the experimenter, where each solving is a piece of simple learning; but, more than this, he becomes more and more skilled in the solving of problems."

He referenced Clark L. Hull's 1940 publication, "Mathematico-Deductive Theory of Rote Learning: a Study in Scientific Methodology."

Bateson revisited this material in 1964 in the paper "The Logical Categories of Learning and Communication," reprinted in "Steps To an Ecology of Mind" (1972), op. cit. He explained how Hull was a pioneer in attempting to quantify psychology and plotted real learning curves along the way.

    "In human rote learning Hull carried out very careful quantitative studies which revealed this phenomenon, and constructed a mathematical model which would simulate or explain the curves of Learning I which he recorded. He also observed a second-order phenomenon which we may call "learning to rote learn" and published the curves for this phenomenon in the Appendix to his book. These curves were separated from the main body of the book because , as he states, his mathematical model (of Rote Learning I) did not cover this aspect of the data."

Note that Hull was attempting to define and validate a simple mathematical model of learning, and the Learning II data didn't fit his model, so he bumped it to an appendix!

Bateson revisited this material because he was studying the causes of schizophrenia and other mental illnesses, and he hit upon the idea that mental illness might be triggered by Learning II that is in error, i.e. that learns the wrong lessons about how to learn. From this I learned that now and then you have to examine your learning-to-learn strategies, review them, and possibly change them; this is a type of process Bateson called Learning III.
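
To make the "steep means faster" point and Bateson's Learning II idea concrete, here is a small sketch under my own simplifying assumptions: treat each Learning I curve as an exponential approach to full proficiency, where a steeper curve just means a larger rate constant, and let "learning to learn" show up as that rate constant improving from one session to the next. The rate values are invented.

    import math

    def proficiency(trial, rate):
        """Learning I: proficiency approaches 1.0 exponentially with practice trials."""
        return 1.0 - math.exp(-rate * trial)

    # Learning II: the learner's rate constant itself improves with each new task.
    rates = [0.10, 0.15, 0.25]   # invented rate constants for sessions 1, 2, 3

    for session, rate in enumerate(rates, start=1):
        curve = [round(proficiency(t, rate), 2) for t in range(0, 21, 5)]
        print(f"session {session} (rate {rate}): {curve}")

The later (steeper) curves reach proficiency in fewer trials, which is why "steep learning curve" really ought to mean easy to learn, not hard.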

Dead Men Tell No Tales

A major challenge of learning from mistakes is that some of them are fatal. Ecologist Ramon Margalef pointed out that predation is usually a lot more educational for the predator than the prey. Business consultant Stewart Brand cautions against studying shipwreck survivors too much; the stories of the dead would probably be a lot more informative, if they were available. This is part of the genius of the book "The Perfect Storm: A True Story of Men Against the Sea" (1997) by Sebastian Junger.

The author tells tales he can't possibly know are true, by making educated guesses and using expert opinions and available facts to paint plausible scenarios.

The effect we are talking about here is called survivorship bias, and there is a very illuminating blog entry about its discovery, in David McRaney's blog "You Are Not So Smart."

He describes the work of a World War II group of American mathematicians called the Applied Mathematics Panel, and specifically a statistician named Abraham Wald.

    How, the Army Air Force asked, could they improve the odds of a bomber making it home? Military engineers explained to the statistician that they already knew the allied bombers needed more armor, but the ground crews couldn't just cover the planes like tanks, not if they wanted them to take off. The operational commanders asked for help figuring out the best places to add what little protection they could...

    The military looked at the bombers that had returned from enemy territory. They recorded where those planes had taken the most damage. Over and over again, they saw that the bullet holes tended to accumulate along the wings, around the tail gunner, and down the center of the body. Wings. Body. Tail gunner. Considering this information, where would you put the extra armor? Naturally, the commanders wanted to put the thicker protection where they could clearly see the most damage, where the holes clustered. But Wald said no, that would be precisely the wrong decision. Putting the armor there wouldn't improve their chances at all.

    Do you understand why it was a foolish idea? The mistake, which Wald saw instantly, was that the holes showed where the planes were strongest. The holes showed where a bomber could be shot and still survive the flight home, Wald explained. After all, here they were, holes and all. It was the planes that weren't there that needed extra protection, and they had needed it in places that these planes had not. The holes in the surviving planes actually revealed the locations that needed the least additional armor. Look at where the survivors are unharmed, he said, and that's where these bombers are most vulnerable; that's where the planes that didn't make it back were hit.

    Taking survivorship bias into account, Wald went ahead and worked out how much damage each individual part of an airplane could take before it was destroyed — engine, ailerons, pilot, stabilizers, etc. – and then through a tangle of complicated equations he showed the commanders how likely it was that the average plane would get shot in those places in any given bombing run depending on the amount of resistance it faced. Those calculations are still in use today.
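
McRaney tells the story better than I can, but the logic is easy to check with a toy Monte Carlo sketch (my own, with invented numbers): if hits to the engine are far more likely to bring a plane down, then the bullet holes you can count on the planes that come home will cluster everywhere except the engine.

    import random
    from collections import Counter

    random.seed(1)
    SECTIONS = ["wings", "fuselage", "tail gunner", "engine"]
    P_DOWNED_PER_HIT = {"wings": 0.05, "fuselage": 0.05, "tail gunner": 0.05, "engine": 0.6}

    holes_on_survivors = Counter()
    for _ in range(10000):                                    # 10,000 sorties
        hits = [random.choice(SECTIONS) for _ in range(5)]    # hits land uniformly on the airframe
        if all(random.random() > P_DOWNED_PER_HIT[s] for s in hits):
            holes_on_survivors.update(hits)                   # we only get to inspect survivors

    print(holes_on_survivors)   # engine holes are scarce on survivors, exactly where armor belongs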

A similar problem in a less fatal context is the error of the missing denominator. It is only natural, for example, to ask how one can get rich, and then look for the answer by interviewing people who got rich. This is the premise of many self-help books, starting with the classic "Think and Grow Rich" (1937) by Napoleon Hill.

Though there is much to be learned from these magnates, we don't get the corresponding stories of people who tried to get rich and failed.

If R people tried to get rich and succeeded, while F tried and failed, the success rate is R/(R+F), which gives us a rough idea of the "odds" of success. Without the number F we can't compute this ratio. I always try to find the missing denominator whenever possible.
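
The same point in a few lines of Python (numbers invented), just to emphasize that the ratio is meaningless until somebody counts the failures:

    def success_rate(succeeded, failed):
        """Rough odds of success; useless without the missing denominator (the failures)."""
        return succeeded / (succeeded + failed)

    # Suppose 30 people got rich and 970 tried and didn't.
    print(success_rate(30, 970))   # 0.03 -- interviewing only the 30 would make it look easy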

CONCRETE, PERSONAL, SPECIFIC LIFE LESSONS

    "Most people are skeptical about the wrong things and gullible about the wrong things."

      — Nassim Nicholas Taleb

Okay, enough abstractions. Nassim Nicholas Taleb likes to ask who has "skin in the game" as a way of qualifying potential experts. (This of course refers to American football and other contact sports in which a player may leave some skin on the playing field.) I've lost some skin — metaphorically speaking — at various points in my career, and gained some valuable lessons.

Get It Before the Second Bounce

One thing I learned while still in school is that the material I remember best from my classes came from the questions I got wrong on the tests. I'm not sure why, but they always seem to have been the most memorable lessons.

Once years ago in a job interview I was asked which Java package had the methods for handling sockets, and I couldn't remember. I didn't think it was that terrific a question, since it's trivial information that's easy to look up, but I still failed that particular test, and I will never again forget: the package java.net is the answer.

Recently I was annoyed by a Facebook quiz on how Floridian I am (it's my birth state) — it said I got 90% but didn't tell me what I got wrong! To heck with that! I refused to "share" their stupid quiz. How can I learn anything from it?

Showing Up For Work


Round Up ride

When I was in college I had a part time job as a ride operator at the Santa Cruz Boardwalk. For a while I worked for an old carny named R. T. Carr who had been in the circus, worked as a railroad cop, built circus miniatures as a hobby, and had many amazing tales of life adventures. I liked and admired him and I think he liked me too, but that didn't stop him from firing me for being habitually late. I was in charge of opening the Round Up ride weekends at 11:00 AM. If I took a bus from campus I had to leave at 8:55 AM, and I arrived an hour and forty-five minutes early. If I took the next bus at 10:55 AM I'd be 15 minutes late. If I left by 10:30 and hitchhiked (which I realize now was a mistake) I might get there on time. I usually left by 10:30. This strategy resulted in me being habitually late, which got me fired. (A few months later R. T. passed away and I took the opportunity to re-apply and get my old job back. Sigh.) What I now understand is that I should've gotten used to leaving at 8:55 AM and being an hour and forty-five minutes early, and made the best of it, taking all of the luck factor out of the situation.

The lesson that has stayed with me is that you have to plan to be early in order to make it on time. I'm glad I learned this before I began to work professionally. When I was traveling the western U.S. helping a sales team move supercomputers, I heard of a book called "Showing Up for Work and Other Keys to Business Success" (1988) by Michael & Timothy Mescon.

In the pre-Amazon days I went to some lengths to obtain a copy, because I just had to have it. Here is a sample:

    A story: once there was a young man who attended school in a large urban university as a part-time student while holding down a full-time job. After nine arduous years of work and study, study and work, he completed his studies. He went to his professor and said, "I'm ready to graduate. I want to be successful. I need your advice."

    The professor looked at him for a moment and asked, "Are you absolutely certain you want to be a success?"

    The student assured the professor that "After nine years of being a second-class citizen, I am committed to the idea of success," and repeated his request for advice.

    "In that case, I will advise you," said the professor. "Show up."

    The student was stunned. "Do you mean to say that after nine years of paying tuition, attending classes, passing exams, and studying, you're saying all I have to do to succeed is to show up?"

    "Well," said the professor, "that's actually the truth only about 70 percent of the time. But if you want to increase your odds, show up on time. And if you want to devastate virtually all competition, show up on time, dressed to play. Chances are, you won't even have to break a sweat."

These days I've been hearing tales of a surge in flaky employees, and the dreaded "no call, no show" which was practically unheard of when I started my career. My ingrained habits of promptness and reliability continue to help me stand out.

The Impossible Is Less Likely

    "can't happen — The traditional program comment for code executed under a condition that should never be true, for example a file size computed as negative. Often, such a condition being true indicates data corruption or a faulty algorithm; it is almost always handled by emitting a fatal error message and terminating or crashing, since there is little else that can be done. This is also often the text emitted if the 'impossible' error actually happens! Although 'can't happen' events are genuinely infrequent in production code, programmers wise enough to check for them habitually are often surprised at how often they are triggered during development and how many headaches checking for them turns out to head off."

      — definition of "can't happen" from hacker-dictionary.com
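
Here is a minimal sketch of the habit the definition describes (my own example, not from the dictionary): check the "impossible" condition anyway, and die loudly if it ever turns out to be true.

    import os
    import sys

    def checked_file_size(path):
        size = os.path.getsize(path)
        if size < 0:
            # "can't happen": a negative size means data corruption or a faulty algorithm upstream
            sys.exit(f"can't happen: negative file size {size} for {path}")
        return size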

Painful lessons have taught me habits that I adhere to firmly. Very occasionally I test my rules by breaking them (also known as tempting fate) and I usually regret it. Some rules include:

  • never carry opened containers of fluids in baggage (factory sealed is OK)

  • bring a spare suit of clothes along to customer meetings

  • never take any food or drink into a trade show booth, except water

  • when locking a car door or building door, hold onto the key while you do it

In each case I am working to reduce the possibility of calamity. For example, if it is impossible for me to lock a key inside a door, it happens less often.

Five Scenarios

I've found it's tough to get people to do disaster planning. There are obvious superstitious reasons for this: sometimes people don't want to "jinx" things. There is also a certain amount of denial involved: if you don't plan for it, maybe it can't happen.

In my own household in Southern California, we recognize that power outages, wildfires and earthquakes are our biggest risks from natural disasters, and have planned accordingly. This includes:

  • having a planned meeting place if we are forced from our home and communications are down (actually we have two: one walking distance, and the other outside the wildfire risk area)

  • having a designated contact person out of town (disasters are nearly always local, and it's good to have an unaffected party for everyone to check in with)

  • all of our vehicles have disaster preparedness kits (see the last blog entry, GADGETS FOR THE TRAVELING TECHIE, August 2016, for more info).

But we know plenty of folks — most, actually — who have no plan.

When this happens on an institutional level, there can be bad consequences. One thinker who has made a career out of analyzing these types of problems is Nassim Nicholas Taleb, especially in his book "The Black Swan" (2007).

He points out that low-probability high-impact events are very hard to accurately forecast because they are rare, and give us few examples to analyze statistically.

The best tool I have ever encountered for getting organizations to do this paradoxically important yet non-urgent and possibly unnecessary preparation is a technique called "scenario planning." A set of slides from Maree Conway at Swinburne University of Technology provides a good overview.

The technique, though not developed by the Global Business Network, was popularized by them.

    Unlike forecasting which extrapolates past and present trends to predict the future, scenario planning is an interactive process for exploring alternative, plausible futures and what those might mean for strategies, policies, and decisions. Scenario planning was first used by the military in World War II and then by Herman Kahn at RAND ("Thinking the Unthinkable") during the Cold War, before being adapted to inform corporate strategy by Pierre Wack and other business strategists at Royal Dutch/Shell in the 1970s. The key principles of scenario planning include thinking from the outside in about the forces in the contextual environment that are driving change, engaging multiple perspectives to identify and interpret those forces, and adopting a long view.

      — Wikipedia article on Global Business Network

    ( en.wikipedia.org/wiki/Global_Business_Network )

A quick-and-dirty version of the process is to describe five future scenarios:

  1. best-case plausible
  2. better than average outcome
  3. typical, expected outcome
  4. worse than average outcome
  5. worst-case plausible

The term "plausible" is to rule out the paranoid and impossible, like Godzilla attacking, and to focus on a range of believable predictions, and then have a plan for each of them. It doesn't seem as macabre as simply "disaster planning," it has a good pedigree, and it prevents future panic when the unexpected (but not completely unexpected) is already in the plan.

Now go forth, fail, learn, succeed and prosper!


Disclaimer

I receive a commission on everything you purchase from Amazon.com after following one of my links, which helps to support my research. It does not affect the price you pay.



Tuesday, August 30, 2016

GADGETS FOR THE TRAVELING TECHIE


tools from Ötzi the Iceman over 5,000 years ago
from age-of-the-sage.org
(http://www.age-of-the-sage.org/archaeology/otzi_the_iceman.html)
    "This Stone Age man [Ötzi] has four pieces of stone on him (and one very exciting piece of futuristic copper metal), but almost all of his kit is totally biodegradable. If he'd been buried properly and dug up nowadays, we'd have known nothing at all about his tailored pants, his raincoat, his belt, his backpack, and especially a couple of dozen different thongs. ... The guy never made a move without a handy reservoir of string."

      — "Tomorrow Now" (2003) by Bruce Sterling

In his non-fiction futurist manifesto "Tomorrow Now," science fiction author Bruce Sterling spends some time going through the pockets of Ötzi the Iceman, concluding that five millennia ago humans liked their gadgets just as much as we do now.

Here's what I said about gadgets in my book, A Survival Guide for the Traveling Techie, in section 2.17, SHOPPING FOR TECHNOLOGY:

    An early reviewer of a draft of this book complained she'd expected more in the way of reviews of the latest handheld and mobile gadgets. Now, I'm a dyed-in-the-wool techie and I love gadgets, but I'm also a technical pragmatist and so I tend to ask 'what will help me do my job?'

    If it's up-to-date gadget reviews you want, check my blog...

I was referring of course to this blog, and though I've been writing it since 2014 I have yet to talk about any gadgets. Oops. So here we go, with the whole truth on my favorites and why.

Pioneers


image from
www.appledorebookshop.com/
pages/books/8187/william-o-steele/
flaming-arrows

I sometimes hear techies say "Pioneers end up with arrows in their backs." I hear this more frequently from sales people and managers in tech industries. I've also heard, "It's hard to be on the bleeding edge and not get cut." These homilies are arguments against being an early adopter of technology. And yet as techies aren't we supposed to be out there embracing the new?

Here's the big picture: I've found you have to divide your gadgets into two categories: early tech and late tech. An early tech gadget is one from a technology that isn't ready for prime time, and you're using it because:

  • you think it represents the future, so you want to get in on the ground floor

  • you want to learn all about it

  • let's face it, you love it

You can only afford a few of these. The pros of being an early adopter of a new technology are:

  • You get to learn all about it first.
  • You get to impress your geek friends with your new, bleeding-edge gadget.
  • You hopefully gain a new capability before most other people.

On the other hand, the cons of being an early adopter are:

  • It can be very aggravating.
  • There is no guarantee that the technology will ever deliver as advertised.
  • Even if you pick the right tech you may end up with the wrong vendor, becoming proficient in a proprietary solution that goes away.
  • You can end up being very annoying if you don't learn to curb your enthusiasm.
  • You may end up learning the wrong lessons, including some "learned helplessness" if you conclude that X is hard, and later it becomes easy. (This happened to me with voice recognition, optical character recognition, and some other low-level Artificial Intelligence applications.)

So carefully choose which early tech you devote mindshare to.

For me, the early tech gadgets and fads have been Personal Computers (PCs), 3-dimensional computer graphics (3D CG), Visualization, Digital Video (DV), Global Positioning System (GPS), Smart Phones and Big Data. I'm sure you have your own list.

The rest of the gadgets in your life are late tech: you have them because you need them, you want the minimum amount of nonsense from them, and so you get mature technologies. I have been this way about (non-smart) cell phones, laptops, databases, cars and pocket knives. Again, you have your own list.

My List of Most Useful Items


image from the cover of my book
"A Survival Guide for the Traveling Techie"

Some of the items listed you can't take on a plane, so don't.

In my family we used to say all you needed was duct tape and WD-40. If it moved and it shouldn't, duct tape it. If it didn't move and it should, use WD-40.

Later when the creator of the Whole Earth Catalog, Stewart Brand, wrote a magazine article attempting to summarize his massive catalog on surviving and thriving in Western Civilization, he listed duct tape and WD-40 and a Swiss Army knife.

If you scrutinize the photo above, most of the items I used to carry around as late as 2005 (GPS, camera, music player, flashlight, flip phone, maps, reference books) are now incorporated into a smart phone. Less romantic I guess, but more efficient.

Here is my high-priority list of what's on my "Bat Belt":

  • Swiss Army knife
    I prefer a model with scissors, awl, corkscrew (useful for untying knots), can opener, toothpick, tweezers, bottle opener, flat blade and Phillips screwdrivers, and two knife blades, and without a saw (too many knuckle cuts)
  • WD-40
  • duct tape
  • iPhone
    apps:
    • GPS/navigation/maps
    • NOAA Weather
    • Wolfram Alpha
    • TouchTerm
      can actually ssh into a shell
    • Yelp
    • Runkeeper
    • Starbucks
    • Facebook & Twitter
      to stay in touch with family and friends; also remarkably good at alerting you to breaking news events
    • Netflix & Hulu
    • SoundHound
    • Tango
    • Find iPhone
  • iPod classic
    content:
    • music mixes
    • videos
  • iPhone/iPod accessories:
  • backup maps

    for when your phone dies or all the phones die; I like to carry Thomas Guides when I can, otherwise I print out local maps from Google and get regional maps from AAA

  • backup phone list (hard copy)
  • surge protector
  • 3-wire extension cord
  • 3-wire to 2-wire adapter
  • battery powered analog alarm clock
  • good luck charm
    I have a Mushu dragon plushie my daughter gave me which is remarkably effective at preventing hardware failures
  • backpack
    or laptop bag, for carrying laptop and etc.
  • books to read
    or you may prefer a tablet or e-reader
    currently reading: Death March (2nd Edition) (2003) by Edward Yourdon


    and Neutra (2004) by Barbara Lamprecht

  • pointer (telescoping or laser)
    I once worked for an old-school CEO yachtsman from Pittsburgh who kept saying you should never point with your finger
  • office supplies:
    • portfolio
      pick a color scheme: silver and black, or gold and brown (ranging from burgundy to tobacco)
    • pad of paper in the portfolio
    • nice pen in the portfolio
    • fine point permanent pens
    • colored pencils
    • "magic" tape
    • scissors
    • hole punch
    • stapler
    • golf pencils to carry in your pocket — never put an ink pen in your pocket
  • gum
    to chew if you start to fall asleep in a meeting or driving

My List of Nice To Have Items


image from dailymail.co.uk
( i.dailymail.co.uk/i/pix/2015/02/24
)

These are things I usually leave at home but sometimes bring on the road as needed.

  • MacBook Pro 13 inch with external disk drive (Touro brand) used by Time Machine backup software, HiDef monitor, cables, external mouse
  • printer/scanner
  • Compaq PC laptop with cables and mouse
  • Linux system with cables and mouse
  • mouse pad
  • Apple TV with cables
  • HiDef video projector with cables
  • Xbox or other game system, with cables
  • disaster/survival preparedness kit:
    • water
    • compass
    • 12 v water boiler
    • cups, bowls and utensils
    • ramen
    • snacks
    • cans of Spaghettios
      for hungry kids especially
    • mirror
      if lost in the wilderness you can signal planes with a hand mirror
    • bedding
    • space blanket
    • waterproof matches
    • crowbar
      rescuers in earthquakes report needing these

A Final Note: Narcissus Narcosis


Narcissus taking a selfie, from
stephendpalmer.com/selfies/

If you don't know who Marshall McLuhan was, you probably should. He was a self-styled media critic, and gave us such aphorisms as "The medium is the message." He began his career as a literary critic in Toronto, specializing in Edgar Allan Poe, but ended up a world-acclaimed expert on the effects of new media on people's behavior and perceptions. He died in 1980, but when WIRED magazine began publishing in 1993 it adopted him as a sort of patron saint of the internet and interactive technologies, spreading his fame and influence further.

In his speeches and writings he was amazingly prescient, and he manages to shed some light on our 21st Century gadgets. He predicted what sounds remarkably like an internet-connected smart phone in the 1960s and '70s, as described by his biographer Philip Marchand in 1989:

    "He told an audience in New York City shortly after the publication of Understanding Media that there might come a day when we would all have portable computers, about the size of a hearing aid, to help mesh our personal experiences with the experience of the great wired brain of the outer world."

McLuhan used the term "Narcissus as Narcosis" to describe our obsession with images of ourselves reflected in our gadgets. (In Greek mythology Narcissus of course was the ultra-attractive man who fell in love with his reflection in a pool. His name comes from the same root as narcotic.) Recent bloggers have noticed the similarity to the phenomenon of "selfies."

McLuhan wrote in chapter 4, The Gadget Lover, Narcissus as Narcosis, in "Understanding Media: The Extensions of Man" (1964):

    The Greek myth of Narcissus is directly concerned with a fact of human experience, as the word Narcissus indicates. It is from the Greek word narcosis or numbness. The youth Narcissus mistook his own reflection in the water for another person. This extension of himself by mirror numbed his perceptions until he became the servomechanism of his own extended or repeated image. The nymph Echo tried to win his love with fragments of his own speech, but in vain. He was numb. He had adapted to his extension of himself and had become a closed system. Now the point of this myth is the fact that men at once become fascinated by any extension of themselves in any material other than themselves...

He elaborated in a 1969 interview in Playboy magazine:

    ...all media, from the phonetic alphabet to the computer, are extensions of man that cause deep and lasting changes in him and transform his environment. Such an extension is an intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what's happening to it. It's a process rather like that which occurs to the body under shock or stress conditions, or to the mind in line with the Freudian concept of repression. I call this peculiar form of self-hypnosis Narcissus narcosis, a syndrome whereby man remains as unaware of the psychic and social effects of his new technology as a fish of the water it swims in. As a result, precisely at the point where a new media-induced environment becomes all pervasive and transmogrifies our sensory balance, it also becomes invisible.

    This problem is doubly acute today because man must, as a simple survival strategy, become aware of what is happening to him, despite the attendant pain of such comprehension. The fact that he has not done so in this age of electronics is what has made this also the age of anxiety...

Something to ponder as you find yourself sucked into the invisible web of your connected gadgets...

Disclaimer

I receive a commission on everything you purchase from Amazon.com after following one of my links, which helps to support my research. It does not affect the price you pay.



Saturday, April 30, 2016

WORLD WIDE WEB THE NEXT GENERATION
~ OR ~
WHAT I LEARNED AT SAN DIEGO STARTUP WEEK


Startup Funding Milestones from Startup Garage (artwork by Nicole's Design)
nicolesdesign.com/evolution-of-audience
    Web 2.0 describes World Wide Web sites that emphasize user-generated
    content, usability, and interoperability. The term was popularized by
    Tim O'Reilly and Dale Dougherty at the O'Reilly Media Web 2.0 Conference
    in late 2004, though it was coined by Darcy DiNucci in 1999.

      — "Web 2.0" Wikipedia, the free encyclopedia

I wish I'd written this last summer when the events were fresh in my mind, and when the news I have was less stale, but life intruded.

A Paragraph With Footnotes About Hypertext

Ah yes, hypertext. We get the concept from Ted Nelson [1], who mostly drew comics [2] about his ideas. The first widely-distributed implementation was HyperCard [3], and then the first Internet-based implementation was the World Wide Web (WWW).

It makes sense to say that Web 0.0 began with Tim Berners-Lee [4] at Conseil Européen pour la Recherche Nucléaire (CERN) [5] in Meyrin, Geneva, Switzerland, and his simultaneous invention of these standards:

  • the Uniform Resource Locator (URL) [6],
  • the HyperText Markup Language (HTML) [7],
  • the HyperText Transfer Protocol (HTTP) [8],
as well as these programs:

  • the first prototype web browser, WorldWideWeb [9],
  • the first prototype web server, HTTPd [10].

His browser did not include the ability to embed images in a page; you had to click a link and the image opened in a separate window. His web server did not allow the creation of any dynamic content, serving up only static pages.

Marc Andreessen [11] improved upon this with the creation of the Mosaic web browser [12], while at the National Center for Supercomputing Applications (NCSA) [13] in Urbana-Champaign, Illinois, which did allow images on a page — an innovation I have compared to the photonovela [14] in Spanish language publishing.

Add in the Common Gateway Interface (CGI) [15] which allows dynamically generated content, and you have all the pieces of what many call Web 1.0.
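
For the curious, here is roughly what the classic CGI contract looks like, as a hedged Python sketch: the web server hands the request to an external program through environment variables, and whatever the program prints (a header block, a blank line, then a body) becomes the dynamically generated page. The greeting logic is invented; only the shape of the input and output matters.

    #!/usr/bin/env python3
    # Minimal CGI-style script: read the query string from the environment,
    # then emit headers, a blank line, and a generated HTML body on standard output.
    import os

    query = os.environ.get("QUERY_STRING", "")
    name = query.split("=", 1)[1] if "=" in query else "world"

    print("Content-Type: text/html")
    print()                                             # blank line ends the headers
    print(f"<html><body><h1>Hello, {name}!</h1></body></html>")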

And of course Andreessen and some investors and associates formed Netscape Communications [16], and productized Mosaic as the Netscape Navigator browser, and then added the JavaScript language [17] to it — an event which I'm pretty sure no one at the time realized would lead to the popularity of JavaScript today, and all the tricks that programmers have learned to do with it as the web has evolved.

This section is a metalogue, in that it both explains and illustrates a concept. The original "spec" for the WWW was basically clickable footnotes, like you see here.

Notes:

Onward

Wikipedia tells us above that "Web 2.0 describes World Wide Web sites that emphasize user-generated content, usability, and interoperability." This includes sharing sites like YouTube and Flickr, social sites such as Facebook and Snapchat, collaborative sites such as Yelp and Wikipedia, and other innovations like multiplayer games and sites where you can caption cat pictures, crowd-sourcing everything from music curating to mate selection.

The pundits then predicted that Web 3.0 would be the "semantic web."

I think they're wrong and I'll tell you why. I have found that when people break a problem in language processing down into "syntax" and "semantics" what they really mean is "the part we know how to automate" and "the part we don't know how to automate." Whenever we make a breakthrough in the semantics column it moves over into the syntax column. You see the problem.

A more pressing problem is that the whole Semantic Web field is foundering in academic doldrums, and the chances of getting the billion web sites in the world to add some kind of "semantic sugar" to make the semantic web work seem slim to me.

Instead, it is clear to me, and some other self-appointed experts, that Web 3.0 is the mobile web, running on mobile devices, GPS-enabled, and inter-operating with the ecology of mobile apps, especially on Android and iPhone.

As for the challenge of getting billions of websites to add some "mobile sugar" it turns out that Google is accomplishing that with economic coercion — more on that later.

One of the best examples I've seen of a "killer app" for web-enabled mobile devices is the restaurant wait system from Waitlist.me, which my wife and I experienced on our last anniversary at a charming outdoor restaurant on Mission Bay called the Barefoot Bar.

The hostess asked for my cell number, and then texted me a link to a smartphone-friendly page to track our progress.


6/21/15 iPhone screen shot from the
Waitlist.me mobile-enabled web app

Instead of giving us a bogus wait time estimate, since who really knows how long each table will be full, we were shown a list of the parties ahead of us, their size, and how long they had been waiting, so we could form our own conclusions. There's something refreshing about not being lied to. I realized we were seeing the way of the future — single-function web ecosystems with very well-solved problems presented with excellent user experiences.

I later found out Waitlist Me was launched in 2012, founded by ex-Googlers, and financed by the Venture Capital (VC) fund Andreessen Horowitz among others. Remember Marc Andreessen (above)? Now he's picking winners for VCs.

Meanwhile Back at the Meetups

I have been involved with a computer graphics society called SIGGRAPH for almost thirty years, helping to run the San Diego professional chapter for a dozen of those years, and also attending local meetings in Los Angeles and a number of international conferences. I've also attended meetings of the Final Cut Pro user group in L.A., and the Kernel Panic Linux User Group (KPLUG) meetings in San Diego.

One thing that all of these events have in common is that they are mainly publicized by web sites and email lists. In our local SIGGRAPH chapter, meeting attendance has been slowly declining for years. Then something interesting happened. We got a new chairman and he began holding workshops on building digital gadgets, and they were very well attended. I assumed at the time that the content was more interesting to folks, but one significant detail was that he publicized the meetings through the Meetup web site.

Only later did I begin to wonder if that was also a reason for the larger attendance.

In June of 2015 my friend Steve P. peer-pressured me into attending a meeting of the San Diego JavaScript Meetup. Now, I don't even like JavaScript. I've mostly ignored it since Netscape slipped it into the browser, except for a few small chores it was good for, but good golly Miss Mollie this language has exploded, and I learned a lot about its modern usages and associated tools from the meeting. But there were also some things in general that I noticed:

  1. incubators

    The meetings were in a corporate "incubator" called The Vine in the expensive heart of downtown San Diego. Now, I have seen startups in cheap spaces like warehouses, barns, and old factory lofts, but never before in a downtown high-rise.

  2. subsidized downtown space

    This led me to the discovery that this incubator was sponsored by The Irvine Company, which I know well because my wife's job has them as a client. Apparently they are giving free space to some qualified startups. This caused me to wonder, what's in it for them? Are they getting future commitments of rent, or equity, or are they just trying to keep vacant commercial property from looking abandoned while trying to kick-start the San Diego startup economy?

  3. open plan

    I also noticed, and verified through other recent experiences, that the new way of office layout is the open plan, in which workers have less space than they would sitting at Starbucks. I've always worked in software jobs where an office with a door that closes was a perk, and a cubicle was what you got otherwise. These "open plan" workspaces, without even a place to hang a family picture, make cubicles look good to me.

Inspired by the high information value of this event, on a topic I didn't even care much about, I attended some more meetups in the next couple of months.

  • another JavaScript meetup, at which I learned that (according to a show of hands at two sequential meetings) the number of JavaScript jobs is increasing and exceeds the number of job seekers, which is decreasing

  • a Big Data meetup, at which I learned that it is a constant struggle for both employees and companies to be perceived as white collar tech and not blue collar tech

  • a Kickstarter meetup, at which I learned you need to have a thriving, professionally-managed social media presence before you launch your Kickstarter project

But the most important thing I learned, no doubt, was that in a few weeks an event was coming called San Diego Startup Week.

Startup Pub Crawl

My initial motivation in attending San Diego Startup Week was to search for a list of San Diego startups, to which I might pitch my consulting services. I did find that, on their web site — I actually didn't need to attend to get it! Out of 85 attendees who listed institutions, some were VCs, some were journalists, some were in academia, some sold consulting or catering, but a handful were small local startups. (By my analysis, the typical San Diego startup is Zesty.io.)

But there were things I learned by attending that were pure gold. A few things I noticed (and also at the above-mentioned meetups):

  • it was a young crowd, with more women and minorities than I have seen at tech events in the past

    That same summer I went to a Linux user group and counted 23 attendees, all male, most over 30 and Caucasian. That was an event about IPv6 packets. But here at SD Startup Week there were very many nationalities, many women, and a conspicuous absence of the "pointy haired boss" trope from Dilbert comics. For a long time there have been attempts to get more diversity into tech. I wonder why it's happening here and now.

  • round portraits and first names

    Every web presence associated with this event seemed to have little round portraits of people and only their first names. The pages looked great on my iPhone.

  • beer-fueled

    Everything seems to happen over beer. Beer is served at meet-ups. There was a "startup crawl" involving beer. Pub crawls are common working/social events. San Diego's reputation for microbreweries probably feeds into this.

  • eager to help other businesses

    Many startups sell services of crafting business plans, getting funding and running social media to other startups, often as a sidelight to their original product plans. They also tend to give away free information as a lure. See the diagram at the top of this blog for an example.


One session I attended was called "Website Creation and Useful Tools" presented by Desiree Eleanor and Nicole Leandra of "Creators Ink." I thought this was going to be about web site programming, but instead it was about getting workable web sites up without programming. A few highlights:

  • almost everything can be done with web-enabled tools, often free

  • those round portraits I mentioned are part of a new mobile-friendly look for JavaScript-powered web pages, with expanding sections all in one page, vertical scrolling but no horizontal, all made possible by HTML5/CSS and increasing memory on most people's computers/phones

  • at the lowest level tools for this "new look" are often created with the Angular.js toolkit

  • most users are engaging at a higher level, with themes (templates) from web hosting sites like Wordpress

  • to be mobile-friendly you must select a "responsive" theme for your web site; responsive is essentially a code word for mobile-friendly

  • Google is "holding a gun to the head" of commercial web sites by down-ranking them if they are not mobile friendly, as of April 21, 2015

  • the response of the Venture Capitalists has been to aggressively fund start-ups that are solving this problem

Here as a public service are my raw notes from Creators Ink's session:


San Diego Startup Week notes 6/19/2015
Website Creation and Useful Tools

- DesireeEleanor@stcrnmdesire.com
- NicoleLeandra@nicolesdesign.com (http://nicolesdesign.com)
#hashtag: @creatorsink
bit.ly/CreatorCommunity

simple: GoogleSites, Weebly, Bigcommerce, Squarespace * (more robust)

bit.ly/CMSchart
justhost.com - hosting
check out zesty.io
{ theme forest }
PICK A RESPONSIVE THEME to make Google happy
{ crazy eggs }

** audience member: blitzmetrics.com/all "Facebook For a Dollar a Day"
   "growth hacking event"

* A/B split testing bit.ly/ABsplittest

nugget:
-------
image titles!  make your home page title full of keywords

tools:
------
* mailchimp
* google analytics
* bit.ly
* FB advertising
* IFTTT if this then than [zapier]
* click to tweet
* Wordpress Plug-ins -- SEO Ultimate, Woocommerce & Extensions, GA

Follow-Up

Well, I've had most of a year to think about what this all means. I relate it to my experiences with HTML5 and CSS. I tried my hand at these core level tools after much study, and created some pages like this one:

They looked great on Mac and PC laptops with Firefox and Chrome, but on an iPhone with Safari they were kind of hinky. I looked on Craigslist for a consultant who could help me — preferably someone with graphic arts skills and CSS chops. Well, I didn't find such a person, but I found many organizations were looking to hire such a person. Things that make you go "Hmmm."

I recently explained my problem to friend and HTML5/CSS instructor Jodi R. She said I should be using a responsive theme at some site like Wordpress. Ah-ha. We talked about the bigger picture. Every little shop trying to use CSS formatting hit problems like I did, with having to test and debug on every platform. This created a "tier" in the market for the Wordpresses of the world, to hire the best CSS talents and create themes for the rest of us to use. So I guess I should go with the flow.

And meanwhile, Google has just tightened the screws yet again:


Disclaimer

I receive a commission on everything you purchase from Amazon.com after following one of my links, which helps to support my research. It does not affect the price you pay.

