
Ondřej Texler

Founding Research Scientist
Drip Artificial

PhD graduate, CTU in Prague


Short Bio

I am a founding research scientist at Drip Artificial, where I lead research efforts in generative AI, diffusion models, synthesizing stylized videos from text descriptions, and propagating edits through video. I obtained my PhD in Computer Graphics at CTU in Prague under the supervision of Prof. Daniel Sýkora, and I hold BSc and MSc degrees in Computer Science from the same university. My research career has revolved around generating realistic-looking content given certain conditions. During my PhD, I focused on style transfer: generating realistic-looking paintings and animated movies. I was fortunate to do two internships at Adobe Research, one at Snap Research, and one at Samsung Research America, which resulted in 8 publications and the Best in Show Award at Real-Time Live at SIGGRAPH 2020. Upon completing my PhD, I joined NEON at Samsung Research America as a senior research scientist, where I spent more than 2 years focusing on computer vision techniques for generating virtual humans, with a particular emphasis on photorealism.

Research Interests

Diffusion Models
Computer Vision
Generative AI
Computer Graphics

News

  • [04/2023]: I left Samsung and assumed a permanent position at Drip Artificial.
  • [03/2023]: Our Synthesizing Photorealistic Virtual Humans paper has been accepted to CVPR 2023!
  • [02/2023]: I joined Drip Artificial as an external research advisor.
  • [10/2022]: My National Interest Waiver Green Card application got approved!
  • [07/2022]: We filed 3 patents related to generating photorealistic virtual humans!
  • [01/2022]: I have been granted an O-1 U.S. visa in the science category.
  • [10/2021]: I have been awarded the Joseph Fourier Prize for my research on Style Transfer!
  • [07/2021]: I gave a talk at SIGGRAPH Now 2021!
  • [06/2021]: I was invited to give a talk at 2d3d.ai!
  • [04/2021]: I successfully defended my PhD thesis!
  • [03/2021]: I moved to California, and started a full-time position at NEON, Samsung Research America!
  • [02/2021]: Our paper FaceBlit: Instant Real-time Example-based Style Transfer to Facial Videos has been accepted to i3D 2021.
  • [11/2020]: I gave a talk for BBC News Arabic regarding our latest research.
  • [10/2020]: Our paper StyleProp: Real-time Example-based Stylization of 3D Models has been accepted to Pacific Graphics 2020, Wellington, New Zealand.
  • [08/2020]: We presented our paper Interactive Video Stylization Using Few-Shot Patch-Based Training in the SIGGRAPH 2020 full paper session, as a short oral at the ECCV 2020 Deep Internal Learning workshop, and at SIGGRAPH Real-Time Live!, where we won the Best in Show Award!
  • [06/2020]: I defended my dissertation thesis proposal and passed the Doctoral State Exam. I plan to finish my PhD in early 2021.
  • [05/2020]: Our paper Interactive Video Stylization Using Few-Shot Patch-Based Training has been accepted to SIGGRAPH 2020.
  • [04/2020]: For the rest of 2020, I am joining the NEON team at Samsung Research America!
  • [12/2019]: Our paper Arbitrary Style Transfer Using Neurally-Guided Patch-Based Synthesis has been accepted to Computers & Graphics journal.
  • [04/2019]: For Summer/Fall 2019, I will join Snap Research in Santa Monica, CA.
  • [04/2019]: Our paper Stylizing Video by Example has been accepted to SIGGRAPH 2019, Los Angeles, CA.
  • [04/2019]: On May 5, I am presenting Enhancing Neural Style Transfer using Patch-Based Synthesis at Expressive 2019 in Genoa, Italy.
  • [03/2019]: On May 6, I am presenting Fast Example-Based Stylization with Local Guidance at Eurographics 2019 in Genoa, Italy.

Selected Patents

  • Hierarchical Creation of Visual Data for Generating Images of Human Faces
    • O. Texler, D. Dinev, A. Gupta, H.J. Kang, A. Liot, S. Ravichandran, S. Sadi
    • App. No. US17/967868, June 2022
  • Creating Talking Animations from Visemes Audio Features
    • S. Ravichandran, A. Liot, D. Dinev, O. Texler, H.J. Kang, J. Palan, S. Sadi
    • App. No. US17/967872, June 2022
  • Disentanglement of Modalities Through Augmentation for Generating Virtual Avatars
    • S. Ravichandran, D. Dinev, O. Texler, A. Gupta, J. Palan, H.J. Kang, A. Liot, S. Sadi
    • Provisional App. No. 63/359,950, July 2022
  • End-to-end System for Synthesizing Talking Virtual Human Avatars
    • D. Dinev, O. Texler, S. Ravichandran, J. Palan, H.J. Kang, A. Gupta, A. Unnikrishnan, A. Liot, S. Sadi
    • Provisional App. No. 63/436,058, December 2022
  • Architecture for Using 1D Inputs in Image-2-Image Translation Networks
    • H.J. Kang, S. Ravichandran, O. Texler, D. Dinev, A. Liot, S. Sadi
    • Provisional App. No. 63/436,211, December 2022

Talks & Interviews

  • SIGGRAPH Now 2021, invited talk  
  • 2d3d.ai, invited talk, 2021  
  • BBC News Arabic, interview, 2020  
  • Real-Time Live!, session at SIGGRAPH 2020
  • ECCV 2020, short oral  
  • SIGGRAPH 2020, paper session  
  • Expressive 2019, paper session
  • EuroGraphics 2019, paper session
  • CESCG 2018, paper session

Ongoing Projects


Prompt-based Video Stylization

Diffusion and generative models to synthesize stylized videos from a text description, including style transfer, video synthesis, edit propagation through video sequences, and implicit neural video representations. I am looking to establish collaborations with university research labs, PhD students, and potential PhD research interns!

Drip Artificial App Store

Publications


Synthesizing Photorealistic Virtual Humans Through Cross-modal Disentanglement

S. Ravichandran, O. Texler, D. Dinev, and H.J. Kang

IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023

Project Page Paper Supplementary PDF BibTeX
CVPR video/project page [Only registered CVPR attendees]

FaceBlit: Instant Real-time Example-based Style Transfer to Facial Videos

A. Texler, O. Texler, M. Kučera, M. Chai, and D. Sýkora

In Proceedings of the ACM on Computer Graphics and Interactive Techniques, 4(1), 2021 (I3D 2021)

Project Page Paper Supplementary Video Presentation BibTeX

StyleProp: Real-time Example-based Stylization of 3D Models

F. Hauptfleisch, O. Texler, A. Texler, J. Křivánek, and D. Sýkora

In Computer Graphics Forum 39(7):575-586  (Pacific Graphics 2020)

Project Page Paper Supplementary Video BibTeX

Interactive Video Stylization Using Few-Shot Patch-Based Training

O. Texler, D. Futschik, M. Kučera, O. Jamriška, Š. Sochorová, M. Chai, S. Tulyakov, and D. Sýkora

In ACM Transactions on Graphics 39(4):73  (SIGGRAPH 2020), Best in Show Award at SIGGRAPH Real-Time Live!

Project Page Paper SIGGRAPH Talk GitHub Supplementary Presentation BibTeX

Arbitrary Style Transfer Using Neurally-Guided Patch-Based Synthesis

O. Texler, D. Futschik, J. Fišer, M. Lukáč, J. Lu, E. Shechtman, and D. Sýkora

In Computers & Graphics 87:62-71  (January 2020)

Project Page Paper GitHub BibTeX

Stylizing Video by Example

O. Jamriška, Š. Sochorová, O. Texler, M. Lukáč, J. Fišer, J. Lu, E. Shechtman, and D. Sýkora

In ACM Transactions on Graphics 38(4):107  (SIGGRAPH 2019, Los Angeles, California, July 2019)

Project Page Paper Supplementary Demo BibTeX

Enhancing Neural Style Transfer using Patch-Based Synthesis

O. Texler, J. Fišer, M. Lukáč, J. Lu, E. Shechtman, and D. Sýkora

In Proceedings of the 8th ACM/EG Expressive Symposium, pp. 43-50  (Expressive 2019, Genoa, Italy, May 2019)

Project Page Paper GitHub Interactive Supplementary Presentation BibTeX

StyleBlit: Fast Example-Based Stylization with Local Guidance

D. Sýkora, O. Jamriška, O. Texler, J. Fišer, M. Lukáč, J. Lu, and E. Shechtman

In Computer Graphics Forum 38(2):83-91  (Eurographics 2019, Genoa, Italy, May 2019)

Project Page Paper GitHub Supplementary Presentation Unity3D Asset BibTeX

Example-Based Stylization of Navigation Maps on Mobile Devices

O. Texler and D. Sýkora

In Proceedings of the 22nd Central European Seminar on Computer Graphics (CESCG 2018, Smolenice, Slovakia, 2018)

Paper Presentation

Education

  • 2018 ‒ 2021
    Doctoral studies (PhD)

    Computer Graphics,
    FEE, CTU in Prague.

    PhD Thesis: Example-based Style Transfer.

  • 2016 ‒ 2018
    Master's studies (MSc)

    Computer Science,
    FIT, CTU in Prague.

    Master's Thesis: Digital Image Processing and Image Stylization.

  • 2012 ‒ 2015
    Bachelor's studies (BSc)

    Computer Science,
    FIT, CTU in Prague.

    Bachelor's Thesis: Architecture Design and Implementation of a Large Software System.

  • 2004 ‒ 2012
    High school

    Mathematics, Physics, and Descriptive Geometry specialization, Gymnasium of Christian Doppler.

Professional Experience

Drip Artificial, San Francisco, California (present)

Research & Development. Leading research efforts on an end-to-end generative AI framework for creating stylized videos and animations from a text prompt; in particular, text-to-video synthesis, example-based video style transfer, and edit propagation through video sequences.

Samsung Research America, California

Research & Development. Research and implementation of computer vision and deep learning techniques to render photorealistic virtual humans, focusing on faces. Work involved conditional GANs, image-to-image translation networks, and deferred neural rendering. Part of the NEON team.

Samsung Research America, California

Research & Development. Research and implementation of various image-to-image and video-to-video translation neural networks for face manipulation, e.g., adding makeup, changing skin tone, adding or removing scars or wrinkles. Part of the NEON team.

Snap Inc., Los Angeles, California

Research & Development. Research on new techniques for training generative adversarial networks for style transfer tasks, focused on scenarios where minimal data is available and an interactive response is required. Also developed a shader-based real-time stylization for human portraits.

Adobe Research, USA

Research & Development. Remote collaboration on several research projects, publications, and a tech transfer project. Topics included computer graphics, patch-based style transfer, and neural-network-based style transfer.

Adobe Research, Seattle, Washington

Research & Development. Combined neural-network-based and patch-based style transfer methods; developed a chunk-based style transfer method with a focus on real-time performance.

Adobe Research, San Jose, California

Research & Development. Guided a patch-based style transfer method using convolutional neural networks, image harmonization, and histogram optimization. Integrated the developed style transfer method into Adobe Photoshop.

Dynavix, Prague, Czechia

Software Architecture & Development. A navigation application for smartphones, tablets, and PND devices. C++, Java (Android), Java EE, Objective-C (iOS), C#.

World of Warcraft game server, Prague, Czechia

Software & Database Development. A World of Warcraft game server: extending game mechanics, scripting artificial intelligence, data mining. C++, C#.
