A Lightweight, Procedural, Vector Watercolor Painting Engine

Abstract
Existing watercolor-like painting software produces high-quality results but requires powerful compute hardware and is limited to screen resolutions. This paper introduces a new algorithm that enables artists to generate watercolor-like paintings in a lightweight manner.

Contributions

  • Uses a particle-based model of pigment flow rather than a grid-based one
  • The particle representation is vector rather than raster, allowing rendering at arbitrary resolutions
  • The particle update step is a physically inspired procedural algorithm that is very fast to compute
  • Achieves common watercolor effects such as edge darkening, non-uniform pigment density, granulation, and backruns

Strengths and Weaknesses

  • Interactivity, which previous attempts at watercolor-like effects lacked or failed to achieve in a lightweight manner
  • The vector representation makes rendering at any resolution possible, with high-quality anti-aliasing for the final output
  • However, heavy branching, intricate flow, and complex textures are difficult to represent with pure vector formats


Algorithm

  • Adopt a sparse representation of the input paint pigment and use a random-walk algorithm to update its position at each time step. (By carefully selecting the model and the update equation, a variety of interesting behaviors can be recreated; a minimal particle sketch follows this list.)
  • Representing paint pigment as a collection of dynamic splat particles makes it possible to express dynamic watercolor paint behavior
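As a concrete illustration of the idea above, here is a minimal sketch of pigment as splat particles advanced by a biased random walk. The `Splat` class, the `wetness` and `bias` parameters, and all constants are my own assumptions for illustration; the paper's actual model and update equations are not reproduced here.

```python
import random

class Splat:
    """One pigment splat: a position plus a per-particle pigment load."""
    def __init__(self, x, y, pigment=1.0):
        self.x, self.y = x, y
        self.pigment = pigment
        self.trail = [(x, y)]          # past positions, later drawn as a vector path

def step(splats, wetness=1.0, bias=(0.0, 0.0), step_size=0.5):
    """One procedural update: every splat takes a biased random-walk step.
    wetness scales how far pigment can still travel; bias stands in for paper
    tilt or an external flow field. All constants are illustrative."""
    for s in splats:
        s.x += random.gauss(bias[0], step_size) * wetness
        s.y += random.gauss(bias[1], step_size) * wetness
        s.pigment *= 0.98              # slow deposition onto the paper
        s.trail.append((s.x, s.y))

# seed a stroke with splats along a short line and advance the simulation
splats = [Splat(x=0.1 * i, y=0.0) for i in range(100)]
for _ in range(50):
    step(splats, wetness=0.8, bias=(0.02, 0.0))
```

Rendering each splat's trail as a vector primitive is what keeps the output resolution-independent.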


Ideas for Extension

  • Allow the canvas angle to change so that droplets or rivulets can form from a watercolor stroke


Open Questions

  • I would like to see actual implementations of these algorithms

Moxi: Real-Time Ink Dispersion in Absorbent Paper

Abstract
This paper presents a physically-based method for simulating ink dispersion in absorbent paper for art creation purposes.

Contributions

  • Develop an ink flow model that can simulate more complex effects than previous work by modifying the basic lattice Boltzmann equation (LBE) for the physics of ink flow in absorbent paper (a sketch of the unmodified LBE step follows this list)
  • Implement the ink flow model and a brush dynamics model in a real-time paint system using both the GPU and the CPU
  • Develop implicit modeling and image-based methods to enhance output quality
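For reference, the sketch below is the standard single-relaxation-time (BGK) D2Q9 lattice Boltzmann step that the paper starts from. Moxi's actual modifications for absorbent paper (fiber structure, pinning, evaporation, and so on) are not included, and the grid size, relaxation time, and initial condition are arbitrary choices for illustration.

```python
import numpy as np

# standard D2Q9 lattice: discrete velocities and weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def lbe_step(f, tau=0.6):
    """One collide-and-stream step of the basic (BGK) LBE on a periodic grid.
    f has shape (9, n, n): one distribution function per lattice direction."""
    rho = f.sum(axis=0)                                   # local water density
    u = np.einsum('id,ixy->dxy', E, f) / np.maximum(rho, 1e-12)   # velocity
    eu = np.einsum('id,dxy->ixy', E, u)
    usq = (u ** 2).sum(axis=0)
    feq = W[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * usq)
    f = f + (feq - f) / tau                               # BGK collision
    for i, (ex, ey) in enumerate(E):                      # streaming
        f[i] = np.roll(np.roll(f[i], ex, axis=1), ey, axis=0)
    return f

# start with a blob of water in the middle of the "paper"
n = 64
f = np.tile(W[:, None, None], (1, n, n)) * 0.1
f[:, 24:40, 24:40] *= 10.0
for _ in range(100):
    f = lbe_step(f)
```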

Strengths and Weaknesses

  • Reflects a detailed understanding of real ink flow
  • The algorithm is more detailed than the one in the watercolor paper

Ideas for Extension

  • Different effects, such as randomly generating droplets after a stroke (in practice, when a brush holds a lot of water, a droplet can accidentally remain on the paper)

Open Questions

  • Would like to see a comparison between this paper and the watercolor paper

RealBrush: Painting with Examples of Physical Media

Abstract
Although conventional digital painting systems use procedural rules and physical simulation to render paint strokes, this paper introduces an interactive, data-driven painting system that uses scanned images of real natural media to synthesize both new strokes and complex stroke interactions.

Related Work

  • Procedural approaches: e.g., Adobe Photoshop
  • Simulation approaches: numerically model the physical interaction between the virtual brush, the canvas, and the pigment medium.
  • Data-driven approaches: include work such as modeling virtual brushes, generating brush stroke paths, and simulating pigment effects.

Contributions

  • plausible reproduction of natural media painting, with its many behaviors, tools, and techniques, in a practical and fully data-driven system


Strengths and Weaknesses

  • Supports a wider array of artistic media
  • Achieves higher fidelity (able to reproduce strokes that are indistinguishable from real examples)
  • Cannot produce watercolor-like effects

Ideas for Extension

  • Eraser implementation: should we just implement an "undo" function, or can we think of another kind of "eraser" in this context of digital media?

Open Questions

  • Are there any other "by-products" of this paper, like Figure 18, which manipulates photographs?
  • "graph cuts can be used to find an optimal seam between segments, followed by gradient domain compositing to smooth the transition.": How do graph cuts work here? (A rough seam-finding sketch follows this list.)

Idle thoughts: On Numenta

Numenta is an AI startup Jonathon told me about over lunch back around February; at the time it didn't particularly grab me. Recently (yesterday, actually), I was talking with jungo about the value of a liberal arts education and that sort of thing, and jungo said, "The problem is that you can't tell at all what will turn out to be useful and what won't until you see the results later; I wish there were some mechanism that made that kind of thing quantitatively visible." That conjured up a picture of many inputs (the "what" in "what will turn out to be useful") intertwining in complicated ways to produce an output (the "results" in "until you see the results later"), and I thought: that's a neural network. I said, "There's something like that mechanism over in machine learning," which is how I ended up explaining backpropagation and deep learning to jungo. Maybe because I always read about this in English, the Japanese words wouldn't come out and the explanation was a bit of a mess, but giving it made me think that this is exactly the fusion of neuroscience and machine learning (even though neural networks themselves grew out of neuroscience's findings about neurons in the first place). That made me wonder how far interdisciplinary research in this area has come, so I spent this afternoon looking into it, and along the way I found the following.

  • Yale has a course called Intro to Systems Neuroscience, a survey course whose textbook is Principles of Neural Science, which is apparently a must-read for neuroscientists (according to the Amazon reviews). So I decided to take it next year!
  • Watson, developed by IBM
  • The "Whole Brain Architecture" initiative (and the Whole Brain Architecture Young Researchers' Group), which I saw on Twitter around last October but passed over without looking into because the name sounded dubious, turns out to be exactly this neuroscience + machine learning story, and now I'm suddenly interested. (Also, their Facebook page shows two of my high school classmates going to the next study meeting on 3/18, which was a nice surprise.)

wbawakate.jp

Whole Brain Architecture Study Meeting: Artificial General Intelligence and the Technological Singularity

https://staff.aist.go.jp/y-ichisugi/besom/AIST11-J00009.pdf

  • What Numenta does is also neuroscience + machine learning

numenta.com

Hierarchical Temporal Memory, NuPIC, and Numenta’s Commendable Behavior

Stanford CS294A: Deep Learning and Unsupervised Learning

Stanford's intro to machine learning is CS229, which covers Supervised Learning (LMS, logistic regression, perceptron, exponential family, naive Bayes, SVM, boosting, bagging), Unsupervised Learning (clustering, k-means, PCA), and Reinforcement Learning in one pass. (The syllabus at the link above has the details.) The assignments look like roughly 80% math and 20% implementation. Skimming Assignment 1, the problems were so similar that the problem sets for STAT 365: Machine Learning and Data Mining, which I'm taking now, could almost have come straight from there.

CS294A, as its title says, is a project-based course focused on Deep Learning and Unsupervised Feature Learning. According to the site, "The goal is to have each team do a publishable piece of research," so expectations seem fairly high. Reading more closely, it is a limited-enrollment course designed for people who want to go into machine learning research. There is essentially only one assignment, and after that each team works on its own project.

That assignment is the Sparse Autoencoder exercise in the top section of
UFLDL Tutorial - Ufldl
Since it was spring break, I gave it a try. (PDF)


Lecture notes

Step 1: Generate the training set
Ten landscape images are given as 512 x 512 pixel data. From them, pick an 8 x 8 patch at random and flatten it into a vector to get one training example; repeat until you have 10,000 of them.
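A rough NumPy sketch of this sampling step (the tutorial's own sampleIMAGES.m is MATLAB, so this is an assumed re-implementation, including the squashing into [0.1, 0.9] that the tutorial recommends for sigmoid units):

```python
import numpy as np

def sample_patches(images, num_patches=10000, patch_size=8, seed=0):
    """Crop random patch_size x patch_size patches from grayscale images and
    flatten each into a column of the returned (patch_size**2, num_patches) matrix."""
    rng = np.random.default_rng(seed)
    patches = np.empty((patch_size * patch_size, num_patches))
    for k in range(num_patches):
        img = images[rng.integers(len(images))]        # one of the 10 images
        y = rng.integers(img.shape[0] - patch_size + 1)
        x = rng.integers(img.shape[1] - patch_size + 1)
        patches[:, k] = img[y:y + patch_size, x:x + patch_size].ravel()
    # remove the DC component, clip to +/- 3 std, squash into [0.1, 0.9]
    patches -= patches.mean()
    pstd = 3 * patches.std()
    patches = np.clip(patches, -pstd, pstd) / pstd
    return (patches + 1) * 0.4 + 0.1

# images: a list of ten 512 x 512 arrays loaded elsewhere (e.g. from IMAGES.mat)
```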

Step 2: Sparse Autoencoder
This is the main part. You just follow the lecture notes, but they are notation-heavy, so unless you track what is a vector, what is a matrix, and what is a scalar while following the equations, everything gets muddled. At first I ran a for loop 10,000 times, as on p. 9, and it was so slow I thought I had a bug. The next section of the tutorial, "Vectorized Implementation", says to do it without for loops; once I did, the runtime came down to about ten minutes.
Step 3 is just a check that the Step 2 code is correct.
Steps 4 and 5 require no new code; I just read through the tutorial's explanations.
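For the record, here is a vectorized NumPy sketch of the cost and gradients that Step 2 asks for. It is my own re-derivation of the tutorial's equations (squared-error reconstruction, weight decay, and a KL sparsity penalty), so parameter names and default values are assumptions rather than the tutorial's MATLAB code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, X, rho=0.01, lam=1e-4, beta=3.0):
    """Cost and gradients for a one-hidden-layer sparse autoencoder.
    X is (n_visible, m) with one patch per column; W1 is (n_hidden, n_visible)
    and W2 is (n_visible, n_hidden). No for loop over the m examples."""
    m = X.shape[1]
    a2 = sigmoid(W1 @ X + b1[:, None])          # hidden activations
    a3 = sigmoid(W2 @ a2 + b2[:, None])         # reconstruction
    rho_hat = a2.mean(axis=1)                   # average activation per hidden unit

    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    cost = (0.5 / m) * np.sum((a3 - X) ** 2) \
        + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + beta * kl

    d3 = (a3 - X) * a3 * (1 - a3)               # output-layer error
    sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d2 = (W2.T @ d3 + sparsity[:, None]) * a2 * (1 - a2)

    grads = (d2 @ X.T / m + lam * W1, d2.mean(axis=1),
             d3 @ a2.T / m + lam * W2, d3.mean(axis=1))
    return cost, grads
```

Feeding this to a quasi-Newton optimizer such as scipy.optimize.minimize with method='L-BFGS-B' (after flattening the parameters) is what brings the runtime down from the naive looped version.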

If you can generate the image below, it worked.
f:id:runenoha:20150312023519j:plain

I'm putting the code on GitHub. (Will add the link later.)

I plan to add a bit more after reading the tutorial's recommended reading.

Reading: "Second Surface: Multi-user Spatial Collaboration System based on Augmented Reality"

Second Surface, from MIT.
There is an introduction video on Vimeo, so I'm embedding it here.

Second Surface on Vimeo

A Japanese name, Shunichi Kasahara, is among the authors.

Contributions
Up to then: systems for creative collaboration required special equipment and could not adapt to everyday environments
With this work: Second Surface allows users to place three-dimensional drawings, text, and photos relative to everyday objects in a collaborative, real-time way without any special setup


System and Implementation

  • Pose estimation of the user's device lets the user place generated content in the shared virtual space.
  • The pose data of each user's device and the generated content are shared in real time via a server.
  • Device pose is estimated from the real-world environment using image-based object recognition against dictionary data (a small pose-placement sketch follows this list)
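A tiny sketch of how a 4x4 device pose could be used to anchor content in the shared space and re-project it for another device (see the note in the list above). The function names, intrinsics, and the ray-at-fixed-distance placement rule are all assumptions for illustration, not the paper's actual matrix procedure.

```python
import numpy as np

def place_content(pose_c2w, touch_dir_cam, distance=1.0):
    """Anchor new content in world space along the device's viewing ray.
    pose_c2w is a 4x4 camera-to-world matrix from image-based pose estimation;
    touch_dir_cam is the normalized ray of the screen touch in camera coordinates."""
    R, t = pose_c2w[:3, :3], pose_c2w[:3, 3]
    return t + distance * (R @ touch_dir_cam)        # world-space anchor point

def project(point_world, pose_c2w, K):
    """Project a shared world-space content point into another device's view."""
    R, t = pose_c2w[:3, :3], pose_c2w[:3, 3]
    p_cam = R.T @ (point_world - t)                  # world -> camera
    if p_cam[2] <= 0:
        return None                                  # behind the camera
    u = K @ (p_cam / p_cam[2])                       # pinhole projection
    return u[:2]                                     # pixel coordinates

# hypothetical intrinsics and poses for two devices sharing content via a server
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[:3, 3] = [0.5, 0.0, -0.2]
anchor = place_content(pose_a, np.array([0.0, 0.0, 1.0]), distance=1.5)
print(project(anchor, pose_b, K))
```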

//memo AR companies: Layar, Metaio

Strengths and Weaknesses

  • The implementation is not detailed in the paper; for example, it says that "Second Surface uses matrix calculation procedure to provide a very natural feeling relative to the physical scale of the real world and the AR content", but exactly how to reproduce that result is unclear.

Ideas for Extension

  • Along with utilizing cloud data, we could incorporate the collected expressions into an online map such as Google Maps.

Open Questions

  • Second Surface seems like it would work better with Magic Leap than with Oculus Rift, since Magic Leap renders 3D images onto the actual world. How could the system be combined with Magic Leap?

Reading: "Insitu: Sketching Architectural Designs in Context"

Original paper:
http://graphics.cs.yale.edu/insitu/


Contributions

  • A novel approach to representing a complex site that enables interactive conceptual design
  • The integration of this representation into a stroke-based sketching system
  • A method for fusing data from different sources, including geographic elevation data, on-site point-to-point distance measurements, and images of the site, into a common coordinate system (a toy coordinate-fusion sketch follows this list)
  • A discussion about the iterative development of a design system, based on collaboration between computer scientists and designers
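As a toy illustration of the third contribution referenced in the list above, here is a flat-earth conversion of latitude/longitude/elevation samples into a local east/north/up coordinate system. The coordinates and helper name are hypothetical, and the paper's actual registration of distance measurements and images is more involved.

```python
import numpy as np

EARTH_R = 6371000.0  # mean Earth radius in meters

def to_local_xyz(lat, lon, elev, lat0, lon0, elev0):
    """Map (lat, lon, elevation) samples into a local east/north/up frame
    centered at (lat0, lon0, elev0), using an equirectangular (flat-earth)
    approximation that is adequate over the extent of a building site."""
    lat, lon = np.radians(lat), np.radians(lon)
    lat0r, lon0r = np.radians(lat0), np.radians(lon0)
    east = EARTH_R * np.cos(lat0r) * (lon - lon0r)
    north = EARTH_R * (lat - lat0r)
    up = np.asarray(elev) - elev0
    return np.column_stack([east, north, up])

# a few hypothetical site samples (degrees, degrees, meters)
pts = to_local_xyz(
    lat=[41.3163, 41.3165], lon=[-72.9223, -72.9220], elev=[20.0, 23.5],
    lat0=41.3163, lon0=-72.9223, elev0=20.0)
print(pts)
```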

Related Work

  • Sketching systems for early conceptual design: simple and intuitive tools (SESAME, Google SketchUp) and 3D stroke-based systems such as 3D6B and Mental Canvas // Insitu modeled its style of user interaction on Mental Canvas
  • Architectural design in context: CAD's detailed final modeling is not appropriate for early conceptual design; simple interfaces and representations are needed instead
  • Modeling from photographs: Thormahlen and Seidel used camera positions and rough geometry recovered with bundler techniques to generate orthoimages that guide design; photo pop-ups combine coarse geometry, textures, and transparency

Strengths and Weaknesses

  • The only current way to represent context is to design within a heavyweight computer-aided design system. Insitu is a novel approach to presenting context, integrated into a design system, that offers interactive tools to acquire, process, and combine geographic and image data into a stroke and image-billboard representation.

Ideas for Extension

  • If we incorporated lighting and shadow settings, the system could be used for landscape surveys before constructing new buildings: how might the shadow of the new building affect the neighborhood?
  • Combined with an Oculus Rift, adding a mode that lets users "walk through" the site they are working on might be helpful

Open Questions

  • I would like to know the details of how the local model of the topography is computed (3.1 Site Topography)
  • What is nonmetric MDS? Why does it work?