Picle

Mar 28

Picle Developments

Last week we had our first Picle strategy meeting since getting back to blighty from SXSW. We have digested a lot of feedback from tweets, emails, reviews, articles, as well as testing it out ourselves, and it was time to step back and take a look at where we’re at.

We started off by evaluating how the launch went and how people have responded to the concept of Picle. Stuart drew this rather rudimentary graph of the potential life span of Picle.

The circle represents our current status. That spike is from the release and the fantastic publicity generated from SXSW. As I wrote in my last post, we went to SXSW with 15 users and came back with 30,000.

 
To put this in a bit of perspective, Foursquare arrived at SXSW 2009 with 50 users and left with 5,000.

So now that we have that initial spike of users there will be an inevitable dip. Downloads have dropped off a little since we arrived back, but we have broken the 45,000 download mark. According to Stuart's graph, the app can go two ways.
 
1. We can trundle along steadily, making changes in the app, improving the user experience and making the whole thing a lot more polished. However, this route is doomed to fail: while the experience may get better, the inability to attract new users and expose the app to a new audience will result in Picle fading away into the digital ether. This scenario is represented by the rather upsetting-looking line A.
 
Or
 
2. We stabilise the app and greatly improve the sharing features so that Picle is introduced to new audiences and users. Represented by line B. 
 
Our MVP of Picle has validated the riskiest part of the proposition - that people want to combine pictures and audio and to string these together to create a story. The level of interest in the app so far and the number of different use cases we've seen prove that, as does the fact that people are bothering to give us feedback. As Eric Ries says, if people care enough to tell you your MVP sucks and how you should make it better, then you're on to something.
 
The next step is harder. We have to start making difficult decisions and prioritise what to do next - which of course also means not doing all of the things we'd like to. As Tim wrote in his last post:
 
 Prioritising things is very difficult. Our evolutionary journey has ingrained many instincts that conflict with this Lean approach. We humans all want more than we can eat.
 
We have an interim release that is awaiting clearance in the app store now. This release has a number of bug fixes, much improved stability, and some quick-win new features such as the ability to share via email and Twitter, as well as the much faster tap-focus camera Julian blogged about. If all goes to plan, this should go live next week.
 
But we’re already working towards another release with more new functionality. I’d like to share with you what we’re planning for that. With Stuart’s advice in mind, here’s a list of the 3 most important things that we are working on. 
 
1. Develop frictionless sharing to multiple social platforms. A lot of the feedback we have received has been that people want to share seamlessly via social networks instead of having to grab the URL of a story/picle and share that manually. We expected that this would be the case when we developed the MVP and this is a validation of that. 
 
2. Create an introduction page. At the moment the app is a bit sparse on how to use its features. We have been fielding a lot of questions around the difference between picles and stories as well as the different audio recording settings. 
 
3. Create a login/sign up page. Finding the signup page at the moment is a bit of a search (it's in the settings tab if you are still looking). People have been getting 500 & 502 errors, meaning that they haven't yet signed up to the web service, so their picles and stories are sitting idle on their phones. We need to develop a login/sign up page so that they can find their friends easily and start sharing their picles immediately.
 
We have been working on some early prototypes which flesh out the user journey. I have added some preliminary sketches by Tom to give you an idea of the process we are thinking of. 
 
 
The reason we have focused on only these three things for now is that we want to get the next version of Picle into the hands of users quickly, so we can see how those features inform further releases. We've picked out the features we think are the most important to help Picle reach a wider audience and make it clear to new users how to use the app.
 
I want to stress that the release waiting for approval by the app store does not contain all of the above, but it will include a number of bug fixes, much improved stability, Twitter and email sharing, as well as the improved camera functionality. 
 
If you look at Tim’s iPhone App Icebox from his previous blog you can see that we are working our way through it. The crosses mean completed tasks. The darker circles are the tasks we have bumped up the priority list. 
 
 
We’re keeping the development and design process for Picle as open as possible. We really appreciate the feedback we’ve received so far - please do keep it coming. Let us know what you think of the direction we’re taking with Picle in the comments here or get in touch with us via Twitter.

Mar 22

A tale of two cameras

I’m going to share some of the thinking and code behind the Picle iPhone app starting with the camera functionality. This will involve showing some code snippets and describing classes and frameworks found in the Apple iOS SDK. 
 
At the heart of the Picle app is the ability to use the iPhone camera and microphone in quick succession. The user experience of this functionality is critical to the feel of the app, so plenty of work has been done both before and after the initial release. Indeed, the next release will have it completely reworked.


In iOS there are two distinct ways to handle the camera: one with the UIImagePickerController (iOS 2.0 onwards), the other using the newer AV Foundation framework (iOS 4.0 onwards). The initial release of Picle uses the older UIImagePickerController, whilst the next release will use the AV Foundation framework.

UIImagePickerController

Those familiar with the iPhone camera will have seen that when the camera is opened or a picture is taken, an iris shutter animation is played. As we use the UIImagePickerController in the current version of Picle, users will be familiar with the view below.

We lay our own graphics over the top to give us our own controls for the camera. The picker initialisation is shown below. We do the usual Objective-C allocation and initialisation. We set the source to the camera as opposed to the camera roll or library and explicitly turn off the standard camera controls.

 _imagePicker = [[UIImagePickerController alloc] init];
 _imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
 _imagePicker.showsCameraControls = NO;

We can, however, still control the flash through the cameraFlashMode property, which can be auto, on or off. We can also set which camera is to be used through the cameraDevice property: front or rear. We add our own buttons for selecting these options, though these don't apply to the 3GS. These controls sit in a custom view on top of the picker view. Displaying the picker is simply a case of calling

[self presentModalViewController:_imagePicker animated:NO];

And is dismissed with

[self dismissModalViewControllerAnimated:NO];

Using the UIImagePickerController is pretty simple and doesn't require much code to get running. However, the problem with disabling the camera controls is that we lose some functionality, like tap to focus and setting the exposure. We also don't really want the iris animation, so together these became the motivation for rewriting the camera functionality using the AV Foundation framework.

AV Foundation

Using the AV Foundation framework gives us considerably more control over how the camera is configured. We wanted to eliminate the iris, have a faster start and faster capture. The AV Foundation framework enables you to get input streams from multiple devices and manipulate video during realtime capture. There is, however, more code required, and one downside is that when taking a still image the video camera is still active. We take a frame grab of the video to place in a view above the video camera while we process the still image; we cannot stop the video until the still image has been captured.

Here is a snippet of the code to initialise the capture session.

//  Init the capture session
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
newCaptureSession.sessionPreset = AVCaptureSessionPresetPhoto;
 
// Init the device inputs

AVCaptureDeviceInput *newVideoInput =
[[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

// Setup the still image output
AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[newStillImageOutput setOutputSettings:
    [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil]];

 // Setup the video grab output
 AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

 // Specify the pixel format
 [captureOutput setVideoSettings:[NSDictionary dictionaryWithObject:
                                [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    
 [captureOutput setAlwaysDiscardsLateVideoFrames:YES]; 

 // Use GCD to create a serial queue on which video frames are delivered.
 dispatch_queue_t queue;
 queue = dispatch_queue_create("cameraQueue", NULL);
 [captureOutput setSampleBufferDelegate:self queue:queue];
 dispatch_release(queue);

 // Add inputs and output to the capture session
 if ([newCaptureSession canAddInput:newVideoInput]) {
     [newCaptureSession addInput:newVideoInput];
 }

 if ([newCaptureSession canAddOutput:captureOutput]) {
     [newCaptureSession addOutput:captureOutput];
 }

 if ([newCaptureSession canAddOutput:newStillImageOutput]) {
     [newCaptureSession addOutput:newStillImageOutput];
 }

In essence, the code above creates a single capture session made up of three parts: a video input from the back-facing camera, a still image output configured to capture JPEGs, and a video data output that delivers BGRA frame grabs on a background queue.

We have to add the video preview layer (an AVCaptureVideoPreviewLayer created from the session) to the layer of the view we wish to run the video in, and then call [newCaptureSession startRunning] to start the video.

We make the class containing this capture session a delegate of the capture and still outputs so methods get called when each frame is shown and when a still image is taken. A copy of the last video frame is kept so when we take a still we have an image to cover the video input. We also animate a flash effect over the whole view to visually indicate the taking of a still image.

Key to the new camera was also adding tap to focus. The capture device has whiteBalanceMode, focusMode and exposureMode properties. If the device supports them, we set all three when the user taps the camera view. We also render a custom focus box for visual feedback.

Conclusion

The end result of using the AV Foundation framework is that the camera starts, and takes still images, quicker than the UIImagePickerController. We also get the tap to focus functionality we required. Clearly there is a huge amount more code involved in using the camera on iOS, but I hope this little taste of what we've done shows the depth you have to go to in order to deliver the desired functionality. This is also only a small part of the overall app.

References:

http://developer.apple.com/library/ios/#DOCUMENTATION/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController/UIImagePickerController.html

http://developer.apple.com/library/ios/#DOCUMENTATION/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html#//apple_ref/doc/uid/TP40010188

Mar 21

The Architecture Behind Picle App

As with most social online applications, you can think of the data as more like a graph than simple objects that stand alone. This kind of application and data is my favourite kind of system to architect, and here is why. I warn you, this might get geeky…

One of the fundamental concepts behind Picle is the ability to share your moments and stories with other people, in the same beautiful way that the iPhone app allows you to play them.

We could have achieved this in an Instagram fashion, having just a single page for a story that's viewable around the web. This, however, seems like a missed opportunity. Picle is all about sharing, but perhaps even more important is the community behind it. So we decided to create a fully functioning website, where you can follow your friends, like their posts and share them around the web. This means you get your own picleapp.com login to view a dashboard, which is effectively a stream of moments and stories created by the friends you are following.
 

Part 1: The Data Layer

It’s all about relations

As with most social online applications, you can think of the data as more like a graph than simple objects that stand alone. For example, consider the image below, showing users following each other.
 
 
If you were to model this in a traditional relational database (MySQL, PostgreSQL) using joins or similar, it wouldn't be too hard, but you would soon encounter problems with the kind of calculations and, more importantly, the scalability you want to achieve.
 
In fact, as I have found before, that sort of database has almost none of the features I would require for architecting the data structure for Picle, which include but are not limited to…
 
  1. Scale and redundancy over multiple nodes without losing availability (i.e. be able to take a fifth of your database stack down and still support sufficient reads).
  2. Distribution of data by adding more nodes, moving and replicating the data around the ring of available nodes.
  3. Advanced queries that can walk many links, perform intersections, etc. and reduce the final results to a calculated set which can then be stored (i.e. MapReduce).
  4. No global write lock, meaning the system's performance is not affected by the number of writes or reads happening concurrently.
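As a toy illustration of the kind of link-walking query in point 3, plain Python set operations capture the idea. This is a hypothetical sketch (the names `following`, `mutual_follows` and `friends_of_friends` are mine, not Picle's):

```python
# Hypothetical follower graph: user -> set of users they follow.
# Plain sets stand in for whatever the database actually stores.
following = {
    "alice": {"bob", "carol"},
    "bob": {"carol", "dave"},
    "carol": {"dave"},
    "dave": set(),
}

def mutual_follows(a, b):
    """Users followed by both a and b - a straight set intersection."""
    return following[a] & following[b]

def friends_of_friends(user):
    """Walk one extra link: everyone followed by the people user follows."""
    result = set()
    for friend in following[user]:
        result |= following[friend]
    return result - following[user] - {user}

print(mutual_follows("alice", "bob"))   # {'carol'}
print(friends_of_friends("alice"))      # {'dave'}
```

A relational database can express this with joins, but once queries walk several links deep, or the intermediate sets need intersecting and storing, this is exactly the territory that MapReduce-style queries are built for.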
 
If you look hard at the previous graph you can now perhaps see a theme occurring, with "Nodes" and "Distribution". Our data can behave and be modelled just like our databases. This even leads me to think that, for this kind of application, traditional web stack databases would be completely the wrong choice.
 
So, how do we store these relations and the values they relate to, and what do we store them in?
 
We essentially need to be able to store two types of data: the main "values" and the relationships between them. In Picle the main values are, in model terms, a "User", a "Moment" and a "Story". A "User" can have many "Stories" and many "Moments". A "User" can have many followers ("Users") and be following many "Users", which then creates a stream of their "Moments" and "Stories" for the user to view.
 
Before we look at the database, let's view this in a pure computer-science way and just create it in data structures, using structures you already understand: arrays and objects.
 
In the following diagram, grey items are objects and orange items are ordered arrays.
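In code, the same shapes can be sketched with nothing more than dicts and lists. This is a hypothetical Python sketch of the structure, not actual Picle code; all the ids and field names are made up:

```python
# Objects (dicts) hold the values; ordered arrays (lists) hold the relations.
users = {
    "u1": {"name": "alice"},
    "u2": {"name": "bob"},
}
moments = {
    "m1": {"user": "u1", "caption": "At the seaside"},
    "m2": {"user": "u1", "caption": "Fish and chips"},
}
stories = {
    "s1": {"user": "u1", "title": "Bank holiday",
           "moments": ["m1", "m2"]},   # ordered array of moment ids
}

# Relationship arrays, kept in the order events happened.
followers = {"u1": ["u2"]}            # who follows u1
user_moments = {"u1": ["m1", "m2"]}   # u1's moments, in order

# Everything is keys pointing at keys - a graph, not a set of flat tables.
captions = [moments[m]["caption"] for m in stories["s1"]["moments"]]
print(captions)   # ['At the seaside', 'Fish and chips']
```

Notice that there isn't a join anywhere: every lookup is either "fetch this key" or "walk this list of keys", which is precisely what the storage engines below are good at.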
 
 

Storage Engines

What we're looking for, then, is something that allows us to store the following…
 
  1. Ordered arrays.
  2. Key/values (perhaps in a hash format, instead of JSON).

Ideally it would also be (mostly) persisted, available and tolerant of node failures. This trade-off is referred to as the CAP theorem, standing for "Consistency, Availability and Partition Tolerance", first conceived by the University of California, Berkeley computer scientist Eric Brewer. It was famously first put into practice by Amazon in developing the Dynamo database, which powered much of their site and infrastructure and is most notably the system behind S3. The theorem states that not all three of these properties are achievable at once and that a trade-off must be made. So, onto choices…
 
  1. Amazon DynamoDB (recently released, very promising, somewhat expensive)
  2. Riak (based on the Dynamo paper, open source, supports multiple storage backends, handles nodes/partitions and replication of data automatically)
  3. Redis (semi-persistent, extremely fast, clustering not here yet/on the way)
  4. Cassandra (built by Facebook, table-like, built-in distribution, uses a confusing protocol)

Unfortunately, none of these currently meets all of my requirements (which will come in part two), so I have used a mix of two. One of them might surprise you, and it's one I also plan to change…
 
Picle runs using Redis to store all relationships and MySQL as the key/value store. Why MySQL? Well, I planned to build the project entirely on Riak, but ran out of time to build the model layer - validations, error messages and form generation - in either Ruby or Javascript. So, for now, MySQL is used as a pretty much pure key/value store. This does have some advantages: backups, simple queries without writing MapReduce (not really a big gain) and the use of AWS RDS.
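To make the split concrete, here is a hypothetical sketch of the pattern: ordered relationship lists (the role Redis plays) drive simple key lookups (the role MySQL plays as a key/value store). Plain dicts stand in for both stores, and none of the key names come from the real codebase:

```python
# 'relations' plays the role of Redis (ordered relationship lists);
# 'kv' plays the role of MySQL used as a pure key/value store.
relations = {
    "following:u2": ["u1", "u3"],              # u2 follows u1 and u3
    "moments:u1": [("m2", 200), ("m1", 100)],  # (id, timestamp), newest first
    "moments:u3": [("m3", 150)],
}
kv = {
    "m1": {"caption": "Sunrise"},
    "m2": {"caption": "Coffee"},
    "m3": {"caption": "Lunch"},
}

def stream_for(user, limit=10):
    """Build a user's dashboard stream: walk the relationship lists,
    merge by timestamp, then hit the key/value store for the values."""
    merged = []
    for followed in relations.get("following:" + user, []):
        merged.extend(relations.get("moments:" + followed, []))
    merged.sort(key=lambda pair: pair[1], reverse=True)
    return [kv[moment_id]["caption"] for moment_id, _ in merged[:limit]]

print(stream_for("u2"))   # ['Coffee', 'Lunch', 'Sunrise']
```

Presumably the real lists would be Redis sorted sets scored by timestamp, so the merging and truncation can happen in the store rather than in application code.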
 

Conclusion

With MySQL in a state that would be easy to replace with any key/value store, I'm in a good position. MySQL performs very fast under these conditions (around 50,000 ops), as there isn't a single join and every lookup key I use is on a secondary index. But I will be in trouble when I come to move to two servers; Riak will be in use by then…
 
In part two, I’ll move on to how you would go about writing an app with this architecture and include sample code. I’d also be extremely interested in what other people think about this type of social scaling and different tools for distributing the data layer.
 
In part three, I'll go into creating APIs for this kind of data and how to optimise your API to return levels of information that let an app build everything it needs in only one or two calls. I'll also go into some detail on paginating this kind of data.
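As a small taste of the pagination point: with timestamp-ordered lists, cursor-style pagination (pass back the timestamp of the last item seen, rather than a page number) falls out naturally. A hypothetical sketch, not a preview of the actual part three code:

```python
# Hypothetical cursor pagination over a timestamp-ordered list - the shape
# a sorted-set query would return: (id, timestamp) pairs, newest first.
items = [("m5", 500), ("m4", 400), ("m3", 300), ("m2", 200), ("m1", 100)]

def page(items, before=None, size=2):
    """Return up to `size` items older than the `before` cursor,
    plus the cursor for the next page (None when we've reached the end)."""
    window = [item for item in items if before is None or item[1] < before]
    chunk = window[:size]
    next_cursor = chunk[-1][1] if len(window) > size else None
    return chunk, next_cursor

first, cursor = page(items)            # [('m5', 500), ('m4', 400)], cursor 400
second, cursor = page(items, cursor)   # [('m3', 300), ('m2', 200)], cursor 200
```

Unlike offset pagination, a timestamp cursor stays stable when new items arrive at the head of the list, which matters for a stream that's constantly being written to.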

Mar 18

Understanding picle (with apologies to Scott McCloud)

picle’s a bit like a comic. As @malbonster said, it’s more dreamlike than video. I think part of that comes from the spaces between the frames into which you have to insert your imagination. That was a theme of Scott McCloud’s amazing graphic explanation of the history, meaning and art of the comic, Understanding Comics. Here’s his illustration of the design decisions a comic artist makes. It’s a remarkably good guide to how to make a great picle, too. 

Here's another extract from the book; it shows one way in which picle isn't (yet) completely like a comic, whatever some people-who-shall-be-nameless are doing with cats with laserbeam eyes.

Mar 16

Picle: what next?

So, back from another SXSW :(

It’s great to be back in a country where the butter doesn’t contain sugar by default, where fizzy drinks don’t come in buckets and where you can buy meat in units of less than one pound. But… I will miss adding cheese and chipotle oak-smoked maple-flakes and sugar-coated tequila-bacon dust to my deep-fried food. If we hadn’t got out when we did I would not have fitted through the doorway of the plane.  

It was another awesome trip for Made by Many for many reasons - not least because of the way people embraced Picle, the iPhone app we launched.

We promised to make Picle an open experiment in Lean product innovation. To that end, we approached SXSW as a giant customer development exercise. This blog post is about how we prioritised the feature-set that we launched with, and how we might grow and evolve from that Minimum Viable Product.

In particular, I wanted to start by explaining why the features above were prioritised ahead of some of the seemingly-obvious must-haves like social sharing.  

Let’s be super-clear, the Picle app and companion web service/website we launched last week were about as minimal as you could possibly get - but that’s the point. 

The feature-set above was the absolute least that we could launch something with in order to get useful, actionable feedback about where to go next.

Prioritising things is very difficult. Our evolutionary journey has ingrained many instincts that conflict with this Lean approach. We humans all want more than we can eat - it’s not just Texas! 

We've tried very hard over the past decade* to overcome these unhelpful instincts when making new stuff. What we've found useful we've stolen and syncretised - our approach is an ongoing experiment, but you could describe it as the bastard love-child of Agile, Lean, Visual Thinking and Service Design. We're like magpies - nicking tools and ideas and rolling them into an emergent set of practices.

At SXSW we went to an event at Adaptive Path's offices where Made by Many's @saulpims talked about this. The high-priest of Lean, Eric Ries, introduced the evening with a suitably inspiring statement that resonated with the way we've approached Picle.

This is a philosophy we've been talking about since 2006. This little excerpt from the still-very-excellent Getting Real book by 37signals is packed with gems that neatly encapsulate the approach.

Skip forward to 2012, and we’re using that approach in conjunction with a live feedback loop from real people - in this case, the thousands of comments, ratings and reviews, tweets and conversations we’ve had with real people about Picle over the last week. 

We deliberately launched Picle without features like social sharing. We deliberately built less, because the leaner you are the easier it is to change.

To quote 37 Signals again:

Less mass lets you change direction quickly. You can react and evolve. You can focus on the good ideas and drop the bad ones. You can listen and respond to your customers. You can integrate new technologies now instead of later. Instead of an aircraft carrier, you steer a cigarette boat. Revel in that fact.

So, here we are - lining up the next releases and weighing up what to add and how the vision has evolved. Instead of  trying to guess, or simply indulging our guts, we’ve got loads of evidence. We want to do this in the open, and so we’re going to share.

Starting with the iPhone app first then (remembering that we also launched a web service and Picle website, for which there is a separate MVP and Icebox) - the diagram below shows the launch release/MVP feature-set, and a bunch of features that we're thinking of building in subsequent releases. The future features are labelled 'Icebox' for obvious reasons. But please also bear in mind that we are also applying fixes and optimising functionality that's already live - both of which have been ongoing since before SXSW and will likely make a new release in a couple of weeks. The darker circles are also already in the next release.

It is a law of nature that you will always have more ideas than you have time to build, but the point is that the things on the left - in the MVP - were prioritised because the things on the right probably wouldn't make sense without first having them in place (for example, there's no point in building sharing before you have something to share).

Of course, it's going to get more tricky, as we're going to have to make decisions about which direction to take Picle in first. In this, we shall be led by customer feedback and validation above all.

To quote The Eric again:

So, on to the web service and Picle website.

Have you seen it? You have to register through the iPhone app and then you can log in. At the moment it’s still very basic - we rushed to get it out the door in time for SXSW - but it’s already starting to look pretty good.

The first thing to say is that we’re making daily releases of new stuff to the web service - there’s no app store submission process. But we still have an icebox or backlog of candidate features we might like to add. Obviously, some of these would synchronise with new directions in the evolution of the iPhone app.

The diagram below shows the core functionality we bundled into the initial release, and the features we’ve got on ice as backlog. The same point as above re: fixes and optimisation obviously applies (think error messages for example). And again, the darker circle has already been done but not yet deployed.

And finally, the *really* difficult bit

Let’s be really honest - we’re a small product innovation and service design company with clients. Our model is to help clients make their own product and services and we’re not set up to launch and operate our own. We made Picle in our spare time and launched it for a bit of a laugh at SXSW, and because it felt like it was worth taking a punt. The question of whether agencies can make products, or should act more like start-ups, is a hot one precisely because no-one really knows how this might work: we are in uncharted territory and Picle is a leap in the dark.

Perhaps the most difficult challenge we face in the coming weeks and months is to maintain the momentum and commit the required attention and resource to make Picle as big a success as we believe that it can be. This will be tough.

But this is why we put something out there in time for SXSW.  Alex had a great idea that we wanted to validate, and the feedback is positive: there is something good about being able to create and share little story objects from moments that are captured on a mobile phone with a photo and soundclip.

Picle allows you to make things that are more like dreams than documentaries. For me, the app’s power is to challenge the idea that video is always the answer.  I think @conordelahunty may well hurt me for saying this but Picle seems to leave a lot more space (dare I say hackable space?) for your imagination. It’s closer to a comic strip than a film, and it’s part of an ecosystem of storytelling and processing apps that - for example - power Instagram. 

We have been overwhelmed by the quality and usefulness of feedback we’re getting from every direction. Please don’t be shy in coming forward with your thoughts. 

———-

* Made by Many is a collection of teams and individuals who have worked with each other at different times and in different places over the past 10-12 years. We came together in 2007 to do this under one roof as a product innovation and service design company that applies start-up thinking to create transformative business outcomes for clients. 

Picle ♥

After a week of BBQs, speeches, panels, margaritas, putting names to faces and more BBQs I’ve landed back at Made by Many HQ and I want to share some of the Picle love.

When we launched Picle as a Minimum Viable Product last week we weren't sure what to expect or how people would react to the app. A week has passed and we're delighted with how well Picle has been received.

We went to SXSW with 15 users and we came back with 30,000. 

There has also been a lot written about Picle in articles, reviews and posts. Here's a selection of them.

The response on Twitter has also been great, with Apple’s App Store account promoting the app, adding to the hundreds of mentions, retweets and pieces of feedback. We even managed a few sightings of Picle in the wild at SXSW. 

It's not all gravy though, as we know. Picle was released as a Minimum Viable Product, so we are improving it as we go along. Tim is going to post some more updates on developments and next steps for Picle, but for now I want to thank all of you out there using Picle, feeding back and sharing the love.

Mar 10

The story of Picle, in its inventor’s own words

Alex Harding is an Interaction Designer at Made by Many and the inventor of Picle. This is the story of Picle in Alex's own words - a story already blogged on this website, but told through a beautiful video that Alex and Paul Wyatt made last week.

Although it’s Alex’s brainchild, Picle has of course been - literally - Made by Many. The extraordinarily talented team includes our brilliant iOS developer Julian James, Alex Barlow who built the web service and website, as well as interaction and service designers Tom Harding, Adam Morris and Conor Delahunty.

Mar 06

Picle is here! (but remember it’s a beta)

Last week we announced the forthcoming release of an iPhone app for SXSW called ‘Picle’. The reception was overwhelming, and we’re really excited to announce that Picle is now available in the app store.

But, hold on… before you run off and start Picling there is something we must remind you of…

This is very much a Minimum Viable Product, not a fully-featured one. The app we're launching today is just a starting point, and we'd love you to become part of the Picle story by providing feedback and suggestions that will help shape the future direction of the app.

We’ve put together this slide deck to outline “the rules of Picle” because we were so blown away by the excitement when we announced it, and just wanted to remind everyone - once again - that this is just the beginning, and that we expect there to be some performance issues.


Happy Picling! We really hope that you enjoy it and we’re all looking forward to hearing what you think of it and how we can make it better. Thanks to those who have already been so kind about it.

May 09

What would Instagram sound like?

Instagram has changed the way I look at photography, from taking single images of beautiful found ephemera to sharing sequences of images as an event or moment unfolds. These moments become a journey through your life, one that is both shared and intimate (as @malbonster mentions in his recent blog post, Making sense of life through photography).

This made me think about the way photography has evolved and integrated into our lives. It also made me wonder how literal photographic journeys could evolve. How could the day of a social group be documented through more than just a camera lens, capturing more than one medium (or sense)?

This led to an experiment: could the 'development' of a photographic journey come through the addition of sound bites? What would the experience become - would sound enhance or disrupt the imagery?

A bank holiday at the seaside seemed like the perfect opportunity to test the idea, and using a combination of Instagram and voice memos, I documented the day. I matched the Instagrams with the sound bites and rendered it as an almost Pummelvision-esque video.

I wasn't sure exactly what I'd make of the final video; on the day it seemed slightly ridiculous, mainly because when you're actually in the moment, you forget you'll forget. But looking back, the photos seemed to take on a new dimension. They come to life, allowing you to remember and relive as if you were there, where the alternative, video, can sometimes overcomplicate things. It's the halfway point that lets you document the best bits with the pros of each medium. Whether the journey works as an idea, I'm not sure, but as an addition to individual images, it could be interesting?