Made in PGH » Established 2009

All It Takes Is One.

Alright, we’ll bite. 2013 has been good to us. Maybe it’s because we’re at the top of the first page of Google results for “Pittsburgh web design” or maybe it’s because in our third year in business word is getting around. Either way, leads are up, win percentage seems to be about the same. We’re a .500 ballclub, it seems. We bail on many projects we know we could win that aren’t right for us, and we overbid on some that we might otherwise take easily if we weren’t already full. Even at a small company, business development feels like a full-time job. For every 10 leads that come in, five are worthless, three disappear before a proposal can take place, and one slips away to an incumbent, a cheaper option, or a better salesman. One, however, that vital, beautiful, fragile one goes on to become a real, honest-to-goodness project that lets us keep the lights on and the t-shirts flowing.

Update: We’ve had a project minimum in place for at least a year ($15,000), but we don’t require a questionnaire before having a call. We also insist on talking with all potential clients before we submit a proposal, and we never fill out RFPs. YMMV. Finally, we second Joe’s link to the ChangeOrder blog. Check out his book for a helpful what-to-do / what-not-to-do guide for design professionals.

Going Off-Canvas without Taking JavaScript

A guest article for the tremendous 24ways: “Infinite Canvas: Moving Beyond the Page”.

An excerpt:

There is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That’s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it.

Design Is a Job.

My web design heroes were never the people who could churn out the sexiest pixels or craft the most bulletproof code. I guess it’s because I’d spent my first professional decade as an account manager (not a designer), mostly at run-of-the-mill web agencies where keeping the lights on was an achievement worthy of celebration. Issues of client management and project selection were never up for debate, despite my frequent protests. That’s why my heroes were the professionals, the ones who wrote about clients, contracts, communication. So when Nate and I started hatching the plan that would become Full Stop, our manifestos were written by people like Andy Rutledge, and Jason Fried, and David Sherwin.

And Mike Monteiro.

Mike’s first book, “Design Is a Job,” was released to the public yesterday. It’s very good. It’s funny and poignant and incisive. But more than that, it’s important. With apologies to the other incredible A Book Apart authors, I’ll go on record as saying it’s the most important book they’ve released to date. Why?

Many industry publications are task-focused “how-to” books—how to code CSS3, how to use Illustrator, how to install ExpressionEngine—and there’s certainly a place for them. “Design Is a Job” is a how-to book of a different kind. It’s about how to sell your craft to a customer with precious little understanding of why they need it. It’s about how to stand up for what you know is right when the easy (and often more lucrative) option is to roll over. It’s about how to protect yourself in an industry where it’s frighteningly easy to get fucked. It’s about how to become an adult when others would just as soon stay children. It’s empowering. Reading “Design Is a Job” is like reading a canonized compilation of the scribbles and notes Nate and I collected during the formation of our company. For us, it’s validation. For others who haven’t quite figured it out yet, it’s nothing short of a call-to-arms.

It’s also timeless. We work in a temporary industry; what’s fashionable or relevant today may be passé or outright false tomorrow. Most design publications—especially of the web design variety—are obsolete (or at least due for a new edition) after a few years. That’s where Mike’s book is different. It would’ve been good 20 years ago, and I’m confident it’ll still be good in another 20 years.

Mike is a polarizing figure to be sure. Some people think he’s a dick on Twitter (he is). Some think he’s a marketer and self-promoter nonpareil, the closest thing we have to a cult of personality within web design (he’s that too). For my part, I’ve always looked up to him. I’ve been lucky enough to get to know him a bit, and I now think of him as a kindred spirit, me in 15 years, something like that. After all, I’m an inveterate asshole, and he’s the most successful asshole I know.

Whatever you think of Mike, one thing is for sure: he’s given us all a gift in “Design Is a Job.” Well, he hasn’t given us anything. You have to fucking pay him for it.

SXSW 2012

We just got back to warm and sunny Pittsburgh from cold and wet Austin. A few quick bullet points about SXSW Interactive 2012:

  • It rained. A lot. Damn near the whole time, in fact (although Sunday was gorgeous).
  • Of all the people we wanted to see and hang out with, only about 25% actually made the trip to Austin. Contrast that to last year, where if you rolled a grenade into The Ginger Man after 9pm, web design as we know it would cease to exist.
  • This time around, we focused on hanging out and eating. Some highlights: Hopdoddy Burger Bar, Torchy’s Tacos, 24 Diner, and Kerbey Lane. We drank a lot of milkshakes and Dublin Dr. Pepper (which is disappearing within weeks). And then there was the BBQ. You know what, BBQ deserves its own bullet point…
  • Franklin BBQ. Accept no substitutes. We went with the Happy Cog Gregs to Smitty’s Market in Lockhart—the birthplace of Texas BBQ—but the old dogs can’t compete with Aaron Franklin’s brisket. Believe the hype and brave the line.
  • Speaking of lines: imagine you’re a platinum attendee. You spent $1000+ to attend all three SXSW conferences. How would you feel about waiting in an epic line—a line that snaked around the entire convention center and at one point went outside into the pouring rain—to get your conference badge?
  • The conference? Oh, right, the conference. Well, we didn’t actually go to the conference. Spend $1200+ for three or four quality talks? We’ll pass.

If there was one good thing about this year’s limited attendance, it’s that we got to spend quality time with many friends, old and new. Thanks to everyone for their graciousness and hospitality. You’re welcome in our town anytime.

One final note, and it’s not necessarily a positive one: SXSW is over, at least as everyone recognizes it. It’s no longer the place our entire industry coalesces. The dropoff in high-profile attendees from 2011 to 2012 was staggering. The writing was on the wall last year, but it appears the overcrowding and diluted session quality at SXSW combined with the proliferation of smaller, more focused regional conferences have become an effective Austin deterrent. If you want to meet people you respect, you’re better off heading to Build, Brooklyn Beta, An Event Apart, or Greenville Grok. This trip was a ton of fun, and we’ll be back to Austin for sure, but I doubt it’ll be during March 2013.

30 Months in 90 Minutes.

Jay and I were privileged to chat with Adam Stacoviak on Founders Talk about leaving comfortable jobs in the middle of a recession to launch Full Stop and, eventually, United Pixelworkers. It’s a conversation we’ve had dozens of times with friends and family, so it was fun to get a chance to talk about it on the record. It’s something I’ve wanted to capture in audio form for a long time.

We owe a huge debt of gratitude to entirely too many people for whatever meager success we’ve enjoyed, but that’s enough introspection for now. If you need more, check out our wrap-up post from August on the first two years and the line-in-the-sand manifesto we threw down not long after we got started.

Thanks to everyone who’s worked with us, bought from us, or helped us along the way. You know who you are.

Un-Traceable.

Our long national nightmare has ended. Traceable is no longer “In Review”. The nuclear winter has begun. Traceable 1.1 has been officially rejected by Apple for violating guideline 13.1:

Apps that encourage users to use an Apple Device in a way that may cause damage to the device will be rejected.

Traceable is (was?) an application that enables illustrators and artists to use the iPad as a portable light table to easily trace built-in patterns or photos from the camera roll. It was approved by Apple over a year ago and has been in the App Store ever since. We sold on average about five copies of the app per week. That’s not a whole lot and certainly doesn’t recoup our development costs, but we had some plans for future versions that would make it more useful and, we hoped, more profitable.

We sent the update to Apple in early January. It moved into review within a few weeks but then mysteriously stalled out. Eventually we contacted Apple about the delay. The following day I received a phone call from someone at Apple who informed us that our application violated the guidelines for inclusion in the store. Of course we were disappointed. I protested. He said there wasn’t anything he could do. So that’s the end of Traceable I suppose.

Or is it?

While we understood the bargain we struck by venturing into Apple-land, the course reversal is difficult to stomach. Not only was our app approved the first time, but it remained in the App Store for a year without incident. Moreover, at least one other application exists that promises the same functionality. ((Released (and updated) after Traceable entered the App Store but before this recent rejection.))

It’s a frustrating position to be in. Apple is not capricious. They just prefer screwing developers to screwing customers. If a customer walks in with a busted iPad, Apple wants to make the customer happy. Ipso facto, Apple creates guidelines designed to prevent situations that inconvenience the customer, cost them money, or both. We just happened to get caught in the crossfire. Yet we can’t help but feel the discretion Apple has could be better applied in this situation. Our application is aimed at adults, adults who draw things for a living. If they want to use our application, they understand the (rather trivial) risks.

At Full Stop, we have iPhones in our pockets, iPads in our bags, and MacBook Pros on our desks because Apple makes the best, most affordable hardware and, frequently, the most well-designed software. The application ecosystem they have created is a tremendous boon. Making it easy for people to pay for digital goods (as iTunes did for music) benefits everyone. Customers receive more and higher quality software (because software designers and developers are compensated), software developers have a ready and willing consumer base, and Apple of course gets happy customers, happy developers, and a nice rake for playing matchmaker. Everybody wins, except when they don’t. When a conflict of interest occurs, the pecking order is clear: Apple, Apple’s customers, and finally Apple’s developers.

So far it’s been a successful strategy. As interested parties who find ourselves awkwardly occupying roles as both customer and developer, we hope Apple will continue to refine their process. More feedback, more visibility, and more consistency would be welcome. ((Also, more speed, but, hey, beggars, choosers, and whatnot.)) Once the well of trust has been poisoned, none but vultures remain.

From here?

The ultimate status of Traceable is unclear. It is still in the App Store as of this writing. Should we be grateful? Should we make a few cosmetic changes and re-submit, hoping for a more favorable judgment by a different reviewer? Should we cut our losses and walk away?

Given the revenue that was being generated and the amount of other things on our plate here, the forget-about-it option probably has the most appeal. Unfortunately, we’re kind of stubborn. Traceable is a good idea, and more than a few people have saved a lot of money by purchasing a virtual light table from us. We think it’s a great fit for Apple’s platform. We hope they see the light, so to speak.

New Client Site: Union Pig & Chicken.

Tonight marks the grand opening of Union Pig & Chicken, Southern food in the heart of Pittsburgh from chef Kevin Sousa and Full Stop co-founder and designer Jay Fanelli.

We were privileged to put our HTML & CSS where our mouth is with another no-nonsense restaurant website. One page with a menu, location, hours, phone, and Twitter. And it’s responsive. Great barbecue, fried chicken, and sides are the stars of the show, not music or Flash. Get in, get out.

If you’re in Pittsburgh, what are you waiting for? Go get some grub.

Union Pig & Chicken

New Client Site: Quovo.com

Quovo makes seeing your entire investment portfolio easy. Full Stop designed a spiffy new marketing site with a rigorous vertical grid and a fun <canvas> chart experiment. Lowell and the guys were so happy they asked us to polish up the inside as well. We can’t wait.

The Future of Siri.

Confession: I basically upgraded from the iPhone 4 to the 4S just to mess around with Siri. ((That, and the new cameras. The iPhone is the only camera I use. With two tiny kids, the camera comes out a lot.)) While the experience has been magically delicious in nearly all respects, one can’t help but continually bump into what feel like arbitrary walls. Siri can apply a relationship to a person (“Joel is my brother”), but she can’t change his birthday or move him to the top of my favorites list or perform thousands of other seemingly trivial actions. Like many others, I’m delighted by what Siri can do yet frustrated by the current limitations.

The Present.

Apple has cracked open a door of possibility with the introduction of Siri. It’s not the first interface to accept voice as an input, but it might be the first to do it in a way that’s both accessible to the casual user and popular enough to matter.

Those who are quick to dismiss Siri as a gimmick cite the aforementioned functional limitations, the awkwardness of speaking aloud in public places, and the latency and artificiality as compared to science fiction’s portrayal ((See TNG, among others.)). These are all true. Many features of the iPhone are unavailable via Siri. It would be weird for someone in an office or on a bus to start talking to his or her phone. (Weirder than the Bluetooth headsets people already use?) Needing to wait for Siri to transmit and fetch data from a distant server, enunciating with excruciating precision, and finding oneself at the beck and call of those chipper beeps can be disenchanting. Yet what are these but the pains of an infant technology cutting its teeth in a world of mature graphical user interfaces? Should we reject voice-driven user interfaces a priori, scorning the possibility of hardware and software improvement?

We have, right now, a useful tool and a tantalizing glimpse at what is possible. That’s more than enough for me.

The Future.

The immediate future looks clear. Apple will continue to refine the Siri experience by removing obstacles and adding features. The foundation appears to be in place for long term growth. I have had few issues with Siri understanding my speech, and that seems to be the common experience.

What we all want to know is: how soon can Apple open the app floodgate? It’s a bewitching notion. The iPhone before apps was revolutionary. The iPhone after apps, indispensable. Can the same be true of Siri?

A Hypothetical Scenario for Siri-alizing Apps.

First, let’s give Siri the ability to open apps, something that it can’t do right now. ((Application launching is something of a middle ground for me. While I believe Apple is most interested (and ought to be) in unleashing speaking and listening as a peer experience to looking and touching rather than voice as simply an alternative for your finger, I expect them to make small compromises in that direction. Essentially, there’s no reason voice shouldn’t make the whole experience richer rather than living in a one-dimensional ghetto.)) “Siri! Launch Tweetbot.” ((Where by “Siri!” I mean, “press and hold the home button until Siri launches.”)) Tweetbot appears on the screen. Because we’re smitten with this Siri thing, we want the ability to perform actions in our current context.

Consider what happens next. Since Tweetbot saves my state automatically, I’m looking at my “Sports” Twitter list. From this screen alone, I can: change the list I’m viewing; open the compose tweet screen; refresh the list; search the list; switch accounts; select a tweet as the target for additional actions; switch to my mentions, direct messages, starred tweets, or profile; or view replies to a tweet. That’s one screen, and I probably didn’t even provide a comprehensive inventory of available actions.

“Refresh tweets” might be a perfectly adequate synonym for the pull-to-refresh mechanic we’ve become accustomed to, but what if I want to interact with a specific tweet? Should a “cursor” appear on the screen indicating the currently active tweet? Of course not. Tweetbot, like every other native iOS app, has been designed with touch as the foremost interaction method. ((Apple’s incredible accessibility achievement with the iPhone notwithstanding.)) By attempting to force voice input into our current graphical conventions, we risk the same errors game developers have routinely made in attempting to port joystick-based games to the touch environment. What was developed for one input, especially if that input was properly understood, is inappropriate to varying degrees for use in another. Furthermore, within this scenario, we have both created for ourselves the non-trivial job of replicating all screen functionality as voice functionality and restricted what we can do with voice to what we can see on the screen.

What’s the Alternative?

As much as I would like to see Siri become a tool for users willing to spend the time necessary to learn the interface, ((Like Quicksilver or Enso.)) Apple appears to be determined to create something else, something that hasn’t really been done before: a conversational user interface. You state a command, Siri complies (if possible) and provides feedback. It’s a much longer, more tedious process, but it might be the only one that can actually work without extensive training.

So what should Apple do to truly embrace voice-driven user interfaces? First, abandon the traditional concept of applications. In the world of Siri, applications are incidental. Data sources matter, commands matter, natural language parsing matters—applications are the occasional byproduct of asking Siri to perform a task and having that request fulfilled. The appropriate paradigm is services. ((Incidentally, services are the one thing I want more than anything else on the iPhone today. Developers have hacked around this with custom URL schemes, but it’s no substitute for the real thing.)) Instead of registering applications, developers would register a Siri service with Apple. The end user would navigate to a special section of the App Store that housed only VUI services. It’s Newsstand for Siri!
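To make the services idea concrete, here’s a rough sketch of what registering one might look like. Every name here is hypothetical (Apple has published no such API), and plain JavaScript stands in purely for illustration:

```javascript
// Hypothetical sketch of a Siri "service" registration. None of these names
// are real Apple APIs; this is just one way to imagine a vocabulary-driven
// service: a spoken name, accepted verbs with synonyms, and a handler that
// receives normalized commands rather than raw audio.
const tweetbotService = {
  name: "Tweetbot", // spoken name that routes an utterance to this service
  verbs: {
    read: ["read", "show", "load"],
    star: ["star", "favorite", "like", "heart", "save"],
  },
  objects: ["tweets", "mentions", "messages", "lists"],
  handle(verb, object, modifiers) {
    // The service never parses speech itself; it receives a canonical
    // verb/object pair plus any extra modifiers Siri extracted.
    return `${verb} ${object}` +
      (modifiers.length ? ` (${modifiers.join(", ")})` : "");
  },
};
```

The point of the sketch: the developer declares vocabulary up front, and the hard natural-language work stays on Apple’s side of the fence.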

Maybe Tapbots wants to make a Siri service. Services (unlike applications) can be used instantly within Siri by simply stating the service name plus the desired action. There is no launching a service. “Use Tweetbot to read me my tweets.” Siri answers, “I am loading your latest tweets.” ((While it’s important to be generous in what Siri can accept, certain components are essential to accomplishing the desired task. At minimum, we need to include the name of the service (Tweetbot, the “subject”), the intended action (read, the “verb”), and the object of the action (tweets, the “direct object”). Other modifiers can also be supported.))
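That subject/verb/direct-object breakdown could be expressed as a toy parser. This is a deliberately naive illustration of the idea, nothing like the statistical language understanding a real Siri performs, and the function and its inputs are all made up:

```javascript
// Toy parser for commands shaped like "Use <service> to <verb> ... <object>".
// A naive sketch of the subject/verb/direct-object breakdown; real speech
// understanding is vastly more forgiving than keyword matching.
function parseCommand(utterance, knownServices, knownVerbs) {
  const words = utterance.toLowerCase().replace(/[^a-z ]/g, "").split(/\s+/);
  const service = knownServices.find(s => words.includes(s.toLowerCase()));
  const verb = knownVerbs.find(v => words.includes(v));
  const object = words[words.length - 1]; // crude: treat the last word as the object
  if (!service || !verb) return null;     // essential components missing; can't fulfill
  return { service, verb, object };
}
```

So “Use Tweetbot to read me my tweets” comes back as `{ service: "Tweetbot", verb: "read", object: "tweets" }`, and anything missing a known service or verb is rejected outright.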

Once Siri begins reading the tweets, we should expect her to pause after each tweet to allow us the opportunity to respond. Unfortunately, today that means pressing the microphone button on the screen. If Siri is to achieve its true potential, we’re going to need to be able to invoke it by just saying “Siri!” and, nearly as importantly, we need to be able to interrupt it. ((This is no small challenge. Our phones would need to be constantly listening for this keyword, which is a battery killer. At that point, we’re basically talking about including all the computational power of Apple’s data center in a hand-held device. We’re not even close.))

At this point we might say something like: “That’s funny. Let’s star that tweet.” Behind the scenes, Siri is magically parsing our cryptic human language. Since we’re in the Tweetbot context, Siri knows to interpret these commands against the options Tweetbot provides. “Star” is just one of possibly a dozen words that should perform the same action; Siri might also accept “like”, “favorite”, “heart”, “save”, and more. She’s also going to need to understand the word “that”. For Siri, “that” can mean a lot of different things. Here it’s critical that it means “the thing we were just talking about”. She also needs to ignore “that’s funny.”

What happens if Siri doesn’t understand? Well, at first Siri should probably break out of context to see if there are any alternative means of fulfilling the query. If not, Siri already has error handling: she says, “I’m sorry, I don’t understand”, or some such euphemism.

Back in the narrative, we’ve starred the tweet. Siri either continues to read the tweets automatically or needs to be re-engaged by us. Let’s be explicit: “Siri, resume reading the tweets.” “Resume” or “continue” should always restart the previous task. Siri moves on to the next tweet, but by this time we’re bored. We say, “Read tweets from my sports list.” The keyword “list” needs to be interpreted as a Tweetbot command. The name of the list needs to be processed, but at this point, we’re right back where we started. Even a slight variation, however, could have radically different results. What if we said instead, “Read tweets about sports”? In that case, Tweetbot might query the Twitter API for the tag “sports”, or it might even have a dictionary of sports-related terms if the data were pre-structured.
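The whole exchange (synonyms collapsing onto one action, resolving “that”, and apologizing when nothing matches) can be modeled as a small conversational context. Again, a hypothetical sketch in JavaScript, not anything Apple actually exposes:

```javascript
// Hypothetical conversational context for the Tweetbot exchange above.
// It remembers "the thing we were just talking about" so "that" can be
// resolved, maps synonyms onto one canonical action, and falls back to
// an apology when no interpretation fits.
const SYNONYMS = { star: ["star", "favorite", "like", "heart", "save"] };

function makeContext() {
  let lastItem = null; // antecedent for "that": the most recently read tweet
  return {
    hear(item) { lastItem = item; }, // called as Siri reads each tweet aloud
    command(words) {
      const action = Object.keys(SYNONYMS)
        .find(a => SYNONYMS[a].some(w => words.includes(w)));
      if (action && lastItem !== null) {
        return `${action}: ${lastItem}`; // e.g. "star: <the last tweet>"
      }
      return "I'm sorry, I don't understand"; // break out of context, apologize
    },
  };
}
```

Notice that “heart” and “favorite” both land on the single canonical action, and that the filler words (“that’s funny”) are simply ignored because they match nothing in the vocabulary.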

Reality.

Voice-driven user interfaces used to be fantasy, or science fiction at best. Now we have one that works reasonably well within a suitably narrow context. Even better, Siri is available on the computer we carry with us all the time rather than the one sitting on a desk. Yet, for now, the magic actually takes place not on this pocketable device but instead on battalions of servers in a distant data center. The delays we experience while using Siri are telling. Audio files of the sounds recorded by Siri as we uhm and uhh our way through asking her to do us a favor need to be shipped across the Internet, processed into her best guess at the words we intended to communicate, submitted to her vast database for comparison with all possible ways we could have asked her assistance, and, eventually, offered back to us as a discrete action she is able to take on our behalf.

That Siri works at all is a tribute to modern advancements in processing strength, power consumption, and network speed and ubiquity ((Or have we now moved from ubiquity to invisibility?)). That Siri is not yet the omnipresent, omniscient, omnicapable Computer of Star Trek is in all likelihood a difference in scale, not kind. It is not hard to imagine a future only a few years from now in which a device the size of the iPhone can remove the quirks and sources of friction we currently experience. With better batteries, more storage, faster processors, smarter algorithms, and speedier connections, it may not be guaranteed to happen, but who will deny the realistic possibility?

This is a revolutionary interface. We’re not going to get by on our hard-earned graphical instincts. The Herculean task facing Apple is educating developers on how to write a Siri service. Making Siri work with Apple’s internal services was no doubt difficult—as evidenced by the frequent downtime and the relatively few available features. Enforcing this level of conceptual change on external developers is almost unimaginably hard. It may not even be possible. Apple may decide to keep Siri in-house indefinitely, slowly expanding the available services. I could live with that. It already makes my life much easier in many ways. But I know we’re all just dying to see the full potential realized. For that to happen, Apple needs to unleash this force by enabling third-party development. The only way this works, however, is to conceive of it as a completely separate interface not handicapped (or propped up) by the existing iOS interface paradigms of a home screen, little icons representing applications, gestures, and the rest. The new interface is the Siri voice and what can be shown within the Siri application. Applications are now simply services of Siri. And Apple is going to need to drill the concepts of VUI into developers who have never dreamed of such a thing. Remember the HIG? That’s going to be big again. Just as the release of the Macintosh required developers to learn and accept GUI principles, Siri redefines what it means to use a computer, and that means grokking VUI from the ground up. ((I have chosen to focus on what I believe Apple may have in store for Siri and, also, on what the perfect voice user interface looks like. It’s entirely possible that many good or at least interesting VUIs could be designed to supplement the traditional graphical user interface. Unfortunately, companies can generally only go in one public direction at any given time. Perhaps Google, Microsoft, RIM, and HP can take up the gauntlet and bring innovative voice features about in other ways.))

Chat Simply Icon for Fluid

An online chat service? Sounds like a Fluid app to me. I whipped up a quick PNG for use as a Fluid icon. Doesn’t look half-bad. Now if only I had someone to talk to… add ‘nate’ and ‘jay’ on Chat Simply.

Here’s the icon: Chat Simply icon for Fluid.