Adding to Weburban’s current lineup of applications is a photographic simulation of the plastic cameras people loved to hate in the 1960s. Light Leak 1.0 for the iPhone produces color and black & white simulated film images whose edges look as though light has leaked onto the film, leaving jagged borders.
In reality, the pictures are run through an algorithm that produces randomly generated, pixel-perfect bleeds, punching up contrast by multiplying pixel intensity, especially along the borders of the image. High-contrast pictures come out looking more jagged and bled; images with little or no contrast look faded and torn.
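Weburban’s actual algorithm is proprietary, so the following is only a rough sketch of the idea described above: a border-weighted, randomly jittered intensity multiplier, assuming a grayscale image stored as a 2D list of 0–255 values. The `strength` and `falloff` parameters are illustrative, not Light Leak’s real settings.

```python
import random

def light_leak(pixels, strength=1.5, falloff=0.25):
    """Boost pixel intensity near the image borders to fake a light leak.

    pixels: 2D list of grayscale values in 0..255.
    strength: maximum intensity multiplier applied at the very edge.
    falloff: fraction of the image over which the leak fades to nothing.
    """
    h, w = len(pixels), len(pixels[0])
    reach = max(1, int(min(h, w) * falloff))
    out = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, v in enumerate(row):
            # Distance to the nearest border, in pixels.
            d = min(x, y, w - 1 - x, h - 1 - y)
            if d < reach:
                # Multiplier ramps from `strength` at the edge down to 1.0,
                # jittered randomly so the bleed looks ragged, not uniform.
                t = 1 - d / reach
                mult = 1 + (strength - 1) * t * random.uniform(0.5, 1.0)
                v = min(255, int(v * mult))
            new_row.append(v)
        out.append(out_row := new_row) and None
    return out
```

Because the multiplier only fires near the edges, interior pixels pass through untouched, which is what keeps the subject of the photo intact while the borders bleed.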
A large amount of research went into the production of Light Leak, including techniques involving film emulsion transfer, film development using plastic cameras, pin-hole film development, and infra-red film development. Careful study of the effects of light and its gradual attenuation was applied to keep the images from looking fake, over-processed, or digital. This was particularly challenging since the iPhone’s camera does not allow exposure manipulation during a snapshot, so all of the processing must account for over- or underexposed images.
The latest update to the iPhone application lineup introduces multi-threading to improve stability and overall processing speed. Since this application is the most computationally intensive to date, multi-threading was critical to performance, and it has been added to every application in Weburban’s iPhone lineup.
The Natural History Science Museum of Los Angeles wanted to create a large kiosk that many guests could operate at the same time. A conventional mouse-based kiosk was impossible because a mouse provides a single point of control. Ideally, the museum’s designers wanted a large screen on which objects could be manipulated by multiple control points (i.e. multiple “mouse” pointers), allowing for a shared experience. Cost was also a factor, so eight-foot touch screens were not an option.
An open source movement has grown up around software libraries designed to manage multiple touch events on an ordinary computer. The software works by registering the x and y coordinates of blobs seen by a webcam behind a translucent glass screen. These blobs are then sent to the event model as custom touch events and handled by the application. Touchlib is the library we used for our project, and the experience was overwhelmingly positive.
The software is released under the open source New BSD license, which allows it to be used and modified in virtually any circumstance. It can be found here:
One of the main quality attributes of the application was that it had to be easily extensible. Flash ActionScript 3.0 was the language of choice since staff at the museum already knew it, and it was the most platform agnostic. This became critical during deployment when the target machines changed from PCs to Mac Mini computers.
Flash integration was easier than I had anticipated. Once the touchlib library captures the x and y coordinates of blobs registered by the webcam, it sends them via UDP to a port watched by a lightweight Java server running in the background. Flash can listen on that port for UDP events and handle them as needed. All of this is done in about two lines of Flash code:
First, import the touchlib Flash library: import app.core.action.RotatableScalable;
Then, start listening for events over UDP: TUIO.init(this, 'localhost', 3000, '', true);
There is a substantial number of Flash commands and events already written and stashed in the utility library, ready for use. We overloaded some of the behaviors to prevent runaway scaling of images and added custom behavior to keep text from being scaled too small. Most of the additional functionality had to do with interface animation and user-controlled interface design. Since the Flash app is constantly listening for UDP events, it is wise not to overload it with other events that are not crucial; be sure to clean up Timer or EnterFrame events that are not needed.
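The overloads themselves are written in ActionScript against touchlib’s classes, but the clamping logic behind them is language-agnostic. A minimal sketch of both guards in Python, with illustrative bounds that are not touchlib’s defaults:

```python
MIN_SCALE, MAX_SCALE = 0.5, 3.0   # illustrative bounds, not touchlib's
MIN_TEXT_HEIGHT = 14              # pixels; below this, labels become unreadable

def clamp_image_scale(requested):
    """Clamp a requested scale factor so pinch gestures cannot run away."""
    return max(MIN_SCALE, min(MAX_SCALE, requested))

def clamp_text_scale(requested, base_height):
    """Refuse any scale that would render text shorter than MIN_TEXT_HEIGHT."""
    return max(requested, MIN_TEXT_HEIGHT / base_height)
```

In the Flash version, checks like these sit inside the overridden scale handler, so every touch-driven resize passes through them before the display object is updated.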
The iPhone can triangulate its physical location using a combination of GPS and cell tower IDs. Some people have been asking me whether it is possible to find the location of a user’s computer when they view a web page. Since every computer uses a unique IP number, and those numbers are registered in blocks to internet service providers, it is theoretically possible to reverse-lookup a number to approximate where the user is. You cannot pinpoint the person as accurately as GPS does, but you can get a general approximation. This could have some interesting uses, especially since the cost of this technology is relatively small.
Physical proximity and distance can be calculated using the haversine formula, which gives the great-circle distance between two points on a sphere. Since the earth is not flat, typical straight-line calculations are pretty useless. Thankfully, the formula is quite simple:
First we get the radius of the earth in either miles or kilometers:
R = 6371 km (mean radius of the earth)
Next, we calculate the delta or change between the latitude and longitude points:
Δlat = lat2 − lat1
Δlong = long2 − long1
Finally, we plug those givens into the formula:
a = sin²(Δlat/2) + cos(lat1) * cos(lat2) * sin²(Δlong/2)
c = 2 * atan2(√a, √(1−a))
d = R * c
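The three steps above translate almost line for line into code. A Python rendering of the formula, with R in kilometers:

```python
from math import radians, sin, cos, atan2, sqrt

R = 6371.0  # mean radius of the earth in km

def haversine(lat1, long1, lat2, long2):
    """Great-circle distance in km between two (lat, long) points in degrees."""
    # The trig functions expect radians, so convert the degree inputs first.
    lat1, long1, lat2, long2 = map(radians, (lat1, long1, lat2, long2))
    dlat = lat2 - lat1
    dlong = long2 - long1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlong / 2) ** 2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return R * c
```

As a sanity check, plugging in San Francisco (37.7749, −122.4194) and Los Angeles (34.0522, −118.2437) yields roughly 560 km.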
For the purposes of this experiment, we also need a data table mapping IP numbers to latitude/longitude points around the world. MaxMind has generously provided a free data set of city points for this very purpose. After loading the data points and writing a script that pulls the browser’s IP number, the page gets a list of location data back. The detail and quality of the results depend on the data set’s coverage. So far in testing, the results are pretty good for the minimal effort put into the coding.
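MaxMind’s real file format and lookup API differ from this, but the reverse-lookup idea itself is simple: convert the dotted-quad IP to an integer and binary-search a table of ranges mapped to city coordinates. The sample ranges below are made up for illustration only.

```python
from bisect import bisect_right

def ip_to_int(ip):
    """Convert a dotted-quad IP string to a single 32-bit integer."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# (range_start, range_end, city, lat, long) -- made-up sample rows, sorted by
# range_start; a real table would be loaded from MaxMind's city data set.
GEO_TABLE = [
    (ip_to_int("5.0.0.0"),  ip_to_int("5.255.255.255"),  "London",        51.5074,  -0.1278),
    (ip_to_int("64.0.0.0"), ip_to_int("64.127.255.255"), "San Francisco", 37.7749, -122.4194),
    (ip_to_int("98.0.0.0"), ip_to_int("98.255.255.255"), "Los Angeles",   34.0522, -118.2437),
]
STARTS = [row[0] for row in GEO_TABLE]

def lookup(ip):
    """Return (city, lat, long) for an IP, or None if no range covers it."""
    n = ip_to_int(ip)
    i = bisect_right(STARTS, n) - 1  # last range starting at or before n
    if i >= 0 and GEO_TABLE[i][0] <= n <= GEO_TABLE[i][1]:
        return GEO_TABLE[i][2:]
    return None
```

Once a visitor’s coordinates come back from the table, feeding them into the haversine formula above gives their distance from any point of interest.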
Uses for something like this could include:
– Add immediate locational feedback on web site analytics without log processing
– Provide web site browsers with geographically specific services
– Provide geographically tailored content on front pages
– Restrict content based on geographic location
Weburban LLC is an iPhone and Android development studio based in the San Francisco Bay Area, California.
Over the span of a week, Weburban staff experimented with real and computer generated inks to come up with a compelling range of borders for images in the new application EdgePix available exclusively on the iPhone app store. Singular streaks, photographic edges, painted brush strokes, and all kinds of smears were used to mask the images.
We wanted people to be able to quickly add clean or gritty edges to their newly captured pictures. Even a simple border helps a Facebook image look more interesting. We wanted to keep the range of edges useful enough to have broad application without watering them down so much that they become boring.
The next iteration of EdgePix is being worked on right now at the Weburban offices. It will use the Weburban Pix 2.0 framework and will allow users to pick the image first and then apply the edge progressively. Also, a new pack of edges will be included free of charge to all current users of EdgePix 1.0.
Apple has made the shiny glass button into an icon for the sleek user interface of tomorrow. Now, with ButtonPix, you can turn any image in your Camera Roll into a shiny glass button with one easy selection.
The button-processing algorithm intelligently detects whether the image is horizontal or vertical, then applies a series of filters and effects to make the image look shiny, rounded, and glass-like. Processing does not alter the original image, so there is no risk of losing pictures during experimentation.
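ButtonPix’s actual filter chain is internal to the app, but the two pieces the paragraph describes are easy to sketch: the orientation check reduces to comparing dimensions, and the glassy look comes largely from a highlight that fades out halfway down the button. The gradient parameters here are illustrative assumptions, not ButtonPix’s real values.

```python
def detect_orientation(width, height):
    """Classify an image as landscape, portrait, or square by its dimensions."""
    if width > height:
        return "landscape"
    if height > width:
        return "portrait"
    return "square"

def gloss_alpha(y, height, peak=0.45):
    """Alpha of the glassy white highlight at row y.

    Strongest at the top edge, fading linearly to zero halfway down --
    the classic glass-button look. `peak` is an illustrative default.
    """
    half = height / 2
    if y >= half:
        return 0.0
    return peak * (1 - y / half)
```

Compositing a white layer with `gloss_alpha` per row over a rounded-corner mask is one common way to get the effect; the real app presumably does considerably more.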
The newest upgrade to the Pix line of applications is GothPix 2.0, the premier application that brought simulated damage and age to the iPhone. The filter acts as a high contrast, high grain, random texturizer for images taken with the application or pulled from the iPhone’s Camera Roll. All images remain at native resolution for maximum fidelity.
GothPix works best on images that can benefit from a dark, gothic mood: floodlit areas at night, self-portraits in harsh lighting, dense urban settings, or bleak wooded areas. Generally, the filter flattens images, so exaggerated dimensionality in the source can make the result more interesting. Artists have used harsh light in urban areas to this effect, producing landscapes that look like they came out of a science fiction horror movie, right on their phones.
The free GothPix 2.0 upgrade delivers several key features. Users told us that control over intensity and splatter was important: at times an inconvenient ink splotch covered the face of an important subject, and in low light the filter’s intense contrast made images muddy. This has been resolved with the addition of a splatter control (on/off) and an intensity slider that tones down the application of the GothPix algorithm.
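GothPix’s internals are not public, but an intensity slider of this kind is typically implemented as an alpha blend between the original pixels and the fully filtered result. A sketch under that assumption, for grayscale images stored as 2D lists:

```python
def apply_intensity(original, filtered, intensity):
    """Blend filtered pixels back toward the originals.

    intensity 0.0 returns the untouched image, 1.0 the full-strength
    filter; values in between tone the effect down, which is how a
    slider can rescue low-light shots from going muddy.
    """
    return [
        [int(o + (f - o) * intensity) for o, f in zip(orow, frow)]
        for orow, frow in zip(original, filtered)
    ]
```

The blend is cheap enough to re-run live as the slider moves, since the expensive filter pass only has to happen once.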
Additionally, the application uses Weburban’s new application framework that includes the following features:
– images can come from the iPhone Camera Roll or be taken directly with the camera
– one level of undo can revert an image if the random filter turns out ugly
– images can be processed over and over for stacked effects
– images can be selectively saved
– faster processing speeds for the GothPix filter
Weburban is currently working on version 3.0 of the application, which will include some exciting new features that bring it in line with Apple’s spatial geotagging initiative. Stay tuned for more information.