Clever things with math


August 10th 2011
Published: May 19th 2012

WebGL Forum





Today was entirely dedicated to Siggraph.

My first item was the WebGL forum.

WebGL is a standard from the same group that produced OpenGL (see Explorations) for displaying graphics in web browsers, and it uses much of the same technology.

Most web graphics currently are done with proprietary browser add-ons like Adobe Flash.

Many developers hate this, for several reasons.

Most importantly, proprietary standards are controlled by a single company, which they can change at any time to promote their interests.

Microsoft’s tight integration of Windows with its other software shows the problems this can cause for competitors.

Also importantly, the browser plugins are often large and sap performance.





These issues motivated people to find a way to display graphics directly in a browser.

The solution was a JavaScript wrapper that can run OpenGL shaders.

This became the basic technology of WebGL, which is now standardized alongside HTML 5 and draws into its canvas element.

All browser companies, with the notable exception of Microsoft, now support it (Internet Explorer users currently need a third-party plug-in).

Google has been a particularly enthusiastic supporter, almost certainly to move people away from Adobe Flash.
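To make the “JavaScript wrapper” idea concrete, here is a minimal sketch of the kind of code a WebGL page runs: grab a drawing context from an HTML canvas and compile a shader written in GLSL, the same C-like shading language OpenGL uses. The canvas id and the shader itself are invented for illustration.

```typescript
// Minimal WebGL sketch (illustrative; assumes a page containing <canvas id="view">).
const canvas = document.getElementById("view") as HTMLCanvasElement;
const gl = canvas.getContext("webgl");
if (!gl) throw new Error("This browser does not support WebGL");

// A trivial fragment shader, written in GLSL.
const fragmentSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // fill every pixel with orange
  }
`;

// Compile the shader on the GPU through the JavaScript API.
const shader = gl.createShader(gl.FRAGMENT_SHADER)!;
gl.shaderSource(shader, fragmentSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shader));
}
```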





The first part of the forum discussed the latest features.

WebGL requires few enough resources that it can run on mobile phones.

A number of presentations talked about optimizing projects for this platform.

The remainder was a series of demos of projects people have done in the system.

Google Maps is now in WebGL, along with Google Body.

The prize for most amazing project goes to Chrysaora by Aleksander Rodic.

A graduate of the Savannah College of Art and Design, he created a website of jellyfish floating in the ocean just to learn the standard.

The result was incredibly lifelike, looking like a scene out of Avatar.





After the forum, I dove into the paper presentations.

In order to work in the field, I need to understand papers.

As noted yesterday, I like to work on software I find interesting.

The papers show what researchers consider the most interesting problems.

I also need to know whether I can follow the often heavy math involved enough to implement it myself.


Shape Warping



The first session I chose contained a number of papers on shape warping.

This is a very old research area in computer graphics.

It comes up often in animation.

Artists and modelers create some geometry, represented as a 3-D mesh of triangles, and then want to change it.

Moving every piece manually is really tedious.

The solution is to specify some change to the model by various methods, and have the computer figure out the details.

Ideally, this is all done in real time so the modeler sees the changes as they do them.





The first paper dealt with modifying models with handles, Bounded Biharmonic Weights for Real-Time Deformation by Alec Jacobson, Ilya Baran, Jovan Popović, and Olga Sorkine.

The artist attaches a virtual tool to the model, then moves it around, and parts of the model move to match.

Different handles have different effects; some will twist the model while others move a chunk as though it were in a box.

Ideally, the designer uses several of them at the same time.

The paper describes a new algorithm for combining the effect of different handle types, in a way that can be quickly solved on a GPU.

It discusses a series of constraints that the combination formula must satisfy, such as not tearing the model into pieces.

These constraints are translated into equations.

The paper then shows how the chosen formula satisfies those equations.

The equations chosen give a handle a large effect near the attachment point and progressively less farther away.

They can be solved in real time on a GPU, meeting the important goal of interactivity.

(Incidentally, this was the paper with the crocodile animation during the fast forward session.)
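To make the handle idea concrete, here is a small sketch of the blending step only. The types and the translation-only handles are illustrative assumptions; the paper's actual contribution is how the per-vertex weights are computed.

```typescript
// Illustrative sketch of blending handle effects with per-vertex weights.
// (Handles here are simple translations; the real system supports richer
// transforms, and the paper's contribution is computing the weights.)
type Vec3 = [number, number, number];

interface Handle {
  translation: Vec3; // how the artist has moved this handle
}

// Deform one vertex: each handle contributes its motion scaled by the vertex's
// weight for that handle. Weights are near 1 close to a handle's attachment
// point, fall off with distance, and sum to 1 across handles.
function deformVertex(vertex: Vec3, handles: Handle[], weights: number[]): Vec3 {
  const result: Vec3 = [vertex[0], vertex[1], vertex[2]];
  handles.forEach((handle, i) => {
    result[0] += weights[i] * handle.translation[0];
    result[1] += weights[i] * handle.translation[1];
    result[2] += weights[i] * handle.translation[2];
  });
  return result;
}
```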





The paper was very math heavy.

I found the equations by themselves hard to follow.

I tend to reason spatially, by visualizing actual shapes in space.

In this case, I finally visualized the effect the equation would have on the samples they provided and understood what was happening.

The problem itself is pretty interesting.





Next up of note is a paper on model morphing, Blended Intrinsic Maps by Vladimir G. Kim, Yaron Lipman, and Thomas Funkhouser.

Maps are heavily used in animation.

The animator specifies two models (often called ‘keys’) and the algorithm shows how to convert one into the other.

The result is the map, which shows how every triangle vertex moves from one pose to the other.





This area is very popular in geometry research, so a number of methods exist to automatically generate the map from the two models.

They all have the basic idea of matching up specific points and then searching for a map with the best fit for those points with a given set of properties (preserving features, no tears, etc.).

The full solution requires testing every possible map for the points, which takes far too long (the problem is “non-polynomial”, meaning the number of tests scales exponentially with the number of points).





The key insight of the paper is that different types of maps work better on different features within the model.

They took a list of simple maps (the intrinsic maps) and created a way of combining them.

Each map is assigned a weight that emphasizes a local area, much like the handles in the previous paper, and the weighted maps are then combined.

The paper describes how to calculate the weighting functions automatically so they have required properties like not collapsing parts of the model to a single point.





This paper was incredibly math heavy, even more than the last one.

I had real trouble visualizing all of it, due in part to lack of specific background knowledge.

The problem they solved is interesting.





The last paper was very different from the other two: Photo-Inspired, Model-Driven, 3D Object Modeling by Kai Xu, Hanlin Zheng, Richard (Hao) Zhang, Daniel Cohen-Or, Ligang Liu, and Yueshan Xiong.

This paper discussed a method of turning a digital photo of something into a 3-D model.

The general case is nearly impossible, because the photo lacks depth information and distorts the size of the object.

The proposed solution is to have the user select a generic model from a list, and then automatically distort the model so its outline matches that of the object in the photo.

The initial model is segmented into parts, to ensure the model structure stays constant during the deformation.





This paper emphasizes the algorithm more than mathematical computation.

The biggest sections cover the decisions the algorithm has to make and how it evaluates each choice.

I could follow the write up.

The problem itself was not intriguing, however, which meant I didn’t enjoy the solution as much.


Procedural Modeling



My afternoon paper session was on procedural and interactive modeling.

Two of the papers were on arranging virtual furniture.

Both used virtually the same algorithm to solve two related problems.

Conferences in general rarely accept two papers as closely related as these, a situation the teams themselves commented on during their presentations.





The first paper was on automatically arranging furniture in rooms, Make It Home: Automatic Optimization of Furniture Arrangement by Lap-Fai Yu, Sai-Kit Yeung, Chi-Keung Tang, Demetri Terzopoulos, Tony F. Chan, and Stanley Osher.

It’s motivated by online games.

These games require a large number of rooms.

Designing every one by hand is too labor intensive to be practical.

Until now, game designers have either designed a few canonical rooms that are used over and over with different textures on objects, or placed pictures of furniture on the walls to give the look of finished rooms.

Both methods are showing their limits with the high resolution of modern systems.





The algorithm in the paper automates this process.

It takes a well-known method in computer science, called a cost function, and applies it to furniture arrangement.

A cost function works by specifying values (the costs) for different parts of a system relative to some variables.

The method then finds the variable values that produce the minimum overall cost.

In this case, the system parts are furniture and the variables are where they are placed in a room.

For example, how far something sits from a wall, and its orientation relative to other furniture, become costs.

The algorithm needs a way of setting up the cost values, which it finds through machine learning techniques.

The algorithm takes a set of properly designed rooms as input, and deduces the cost function from how they are laid out.
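As a rough illustration of what such a cost function looks like, here is a toy version with two hand-written terms: a distance-from-wall penalty and a pairwise orientation penalty. The terms and their weights are invented for the example; in the paper they are learned from the example rooms rather than typed in.

```typescript
// Toy furniture-arrangement cost function (illustrative terms and weights only;
// the paper learns its terms from example rooms instead of hard-coding them).
interface Item {
  x: number;      // position in the room
  y: number;
  angle: number;  // orientation in radians
}

function roomCost(items: Item[], roomWidth: number, roomHeight: number): number {
  let cost = 0;

  // Term 1: penalize furniture that drifts away from every wall.
  for (const item of items) {
    const wallDistance = Math.min(item.x, item.y, roomWidth - item.x, roomHeight - item.y);
    cost += wallDistance;
  }

  // Term 2: penalize pairs of items whose orientations disagree.
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      const diff = Math.abs(items[i].angle - items[j].angle) % (2 * Math.PI);
      cost += 0.1 * Math.min(diff, 2 * Math.PI - diff);
    }
  }
  return cost;
}
```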





With the cost function, the next step is to create a room of randomly laid out objects.

The algorithm rearranges them, attempting to find a lower cost.

This is not straightforward, because a change that reduces the cost contribution from one object can very easily increase it for another.

The only way to find the definitive solution is to test every possible arrangement, another non-polynomial problem.





Thankfully, the cost function algorithm has been well studied in computer science.

A perfect solution can’t be found in a reasonable amount of time, but it can be approximated by an algorithm called the Metropolis-Hastings algorithm, which was developed in 1953.

It finds a small range of test arrangements, and calculates their costs.

It then chooses one, with a probability inverse to its cost.

It does not always pick the lowest-cost option, because that could be an arrangement that is cheaper than most similar arrangements but much more expensive than arrangements that look completely different (called a “local minimum”).

The new arrangement is then used to generate more to test.

At each step, the differences between the current arrangement and the next test arrangements shrink, until a solution is decided on.
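Here is a compact sketch of the search loop described above, written generically so it could drive the toy cost function from earlier: propose a small random change, accept it with a probability that favors lower cost (so occasional uphill moves can escape a local minimum), and shrink the proposal size over time. The constants and helper names are illustrative; neither paper's actual implementation is this simple.

```typescript
// Illustrative Metropolis-style search loop; not either paper's implementation.
function optimizeArrangement<State>(
  initial: State,
  cost: (state: State) => number,
  propose: (state: State, stepSize: number) => State, // returns a slightly changed copy
  steps = 10000
): State {
  let current = initial;
  let currentCost = cost(current);
  let stepSize = 1.0;          // how large a change each proposal makes
  const temperature = 1.0;     // controls how often worse moves are accepted

  for (let i = 0; i < steps; i++) {
    const candidate = propose(current, stepSize);
    const candidateCost = cost(candidate);

    // Accept with probability exp(-(new - old) / T): improvements are always
    // taken, and worse moves are sometimes taken to escape local minima.
    if (Math.random() < Math.exp((currentCost - candidateCost) / temperature)) {
      current = candidate;
      currentCost = candidateCost;
    }

    stepSize *= 0.9995;        // proposals shrink as the search settles down
  }
  return current;
}
```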





The other paper was Interactive Furniture Layout Using Interior Design Guidelines by Paul Merrell, Eric Schkufza, Zeyang Li, Maneesh Agrawala, and Vladlen Koltun.

It was motivated by interactive design.

Most people don’t know formal techniques of interior design.

Instead, they move things around until it looks good.

Moving actual furniture is a pain, so programs exist to do it virtually.

The paper presents an algorithm for generating suggestions during the process.

The basic design is the same as the previous one: set up a cost function and then use the Metropolis-Hastings algorithm to minimize it.

The difference here is how they calculate the cost function.

The researchers talked to a number of interior design professionals, and translated the principles they listed into the equations.

The paper has a table of them.

They also need the algorithm to run at interactive rates, so they optimized the solution search for speed.

They also designed the algorithm in a highly parallelized way that can be implemented on a GPU.





Both papers ran user studies on the final room arrangements.

They had people look at the rooms and evaluate them.

The papers go into the statistical techniques used to ensure meaningful results.

In both cases, people liked the rooms.





Both papers were a clever application of an existing algorithm to a new domain.

I prefer the second paper in this respect, because their cost function considered explicit aesthetics as well as pure functionality.

I would love to see how the first paper’s solver implementation performs with the second paper’s cost function, compared with the one it actually used.

I was able to follow the math, and the algorithms were easy to visualize.





At the food court for lunch, I ended up in line next to people from a major special effects company.

Turns out they are animators.

I asked about the business.

Working in special effects, it turns out, is much like being any other type of film technician.

The business is ruthlessly competitive, with constant pressure to come up with new ideas.

People are only as good as their last work, so they need to constantly update their skills and sell themselves.

On the plus side, seeing what someone did in a finished film is really cool, although nobody outside the business can tell who did what.

I had to ask whether they ever meet celebrities.

Animators meet the occasional director at a meeting, but that’s about it.


Animation Festival Electronic Theater



Tonight, I saw the Electronic Theater screening.

Many filmmakers submit shorts to the festival.

Those selected by the jury end up in different screenings (usually organized by topic) with the best shown at the Electronic Theater.

These screenings are the most popular session at the conference.

During the late 1990s they had all the glamor of a film festival, shown at the type of theater used for movie premieres, with hard-to-get tickets.

This year the screening was in a large room at the convention center, and I got a seat just by showing up (with a badge, of course).





The screening itself contained a long list of shorts, alternating with clips from major feature films.

The latter were all done by major effects houses like ILM, showing how they created cutting edge effects.

The shorts featured a number of student films.





Pixar has entered shorts in the festival ever since their first, The Adventures of André and Wally B., back in 1984.

They use shorts in part to explore new ideas and techniques that later show up in their full length projects.

The short this year was La Luna, directed by Enrico Casarosa, which tells the story of a unique Italian American family.

Their livelihood is collecting fallen stars, on the moon!

The newest member joins the family business that night, and conflict ensues.

The short explores cutting edge lighting effects, such as what a character looks like lit from below by hundreds of little glowing stars.

The director gave a talk on his film earlier in the conference, which I didn’t have time to see.





The jury hands out a number of prizes each year.

The second most important, the Jury Prize, went to Paths of Hate by Damian Nenow, a short about two jet fighter pilots locked in mortal combat until they turn into skeletons flying to hell.

For me, the film was most notable for its highly stylized art, based on German Expressionist line and color ideas (see Arch Madness).

The longer the battle went, the more stylized the film became.





The best short overall wins the “Best in Show Award”.

The award is pretty significant; the winner earns an automatic Oscar nomination for Best Animated Short.

The winner this year was The Fantastic Flying Books of Mr. Morris Lessmore, by William Joyce and Brandon Oldenburg.

It tells the story of a library of living books that are only happy when someone reads and appreciates them.

The books use a number of methods to ensure this happens.

Technically, the film is a tour de force combining stop motion, hand drawn, and computer animation so seamlessly I couldn’t tell which part was which.

(LATE UPDATE) This film later won the Oscar, the second winner from Siggraph to do so.



