Topaz Labs — 5 Years Qt Development Experience Summary

A 5-minute video version can be found here:

https://www.youtube.com/watch?v=O0K7QeCXMqk

Links to all demos in the description.

Link to the Slideshow: https://docs.google.com/presentation/d/1Rd4Ywcp8urxz1l6OwqTdsD26s3WtTTHwS67w1LNqZco/edit?usp=sharing

Links to demos interspersed throughout the article below.

2016 — Frontend —

Frontend in QML: all sorts of controls, including dropdowns, menus, toggle buttons, radio buttons, curve tools, graphs, selection tools, transform tools, masking tools, and ellipse tools.

I became very familiar with QML, writing base classes in QML and using the composition design pattern whenever I could.

Wrote and edited small amounts of GLSL shader code.

Very familiar with passing QVariantMaps between C++ and QML.

Signals and slots, the QtObject type, onVariableChanged-style property change handlers, and other QML conventions.
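
As a minimal sketch of that C++/QML interop (the class, signal, and property names here are invented for illustration, not taken from the Topaz codebase), a QObject subclass can accept a QVariantMap from QML through a slot and hand one back through a signal:

```
// Hypothetical bridge object shared between C++ and QML.
#include <QObject>
#include <QVariantMap>

class FilterBridge : public QObject
{
    Q_OBJECT
public:
    explicit FilterBridge(QObject *parent = nullptr) : QObject(parent) {}

public slots:
    // Called from QML, e.g. bridge.applySettings({ "strength": 0.5, "radius": 3 })
    void applySettings(const QVariantMap &settings)
    {
        m_settings = settings;
        emit settingsChanged(m_settings);   // QML reacts via an onSettingsChanged handler
    }

signals:
    void settingsChanged(const QVariantMap &settings);

private:
    QVariantMap m_settings;
};

// Registration (e.g. in main.cpp), so QML can instantiate FilterBridge:
//   qmlRegisterType<FilterBridge>("App", 1, 0, "FilterBridge");
//
// QML side, for illustration:
//   FilterBridge { id: bridge; onSettingsChanged: console.log(settings.strength) }
//   Button { onClicked: bridge.applySettings({ "strength": 0.5, "radius": 3 }) }
```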

Texture Browser — a QML GridView with delegates and an SQLite backend that manages user-generated textures composed of multi-resolution images, plus mouse interaction.
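
A rough sketch of how an SQLite-backed model can feed a QML GridView through delegates (the table, column, and role names below are made up for illustration; this is not the actual Topaz schema):

```
// Hypothetical SQLite-backed model exposed to a QML GridView.
#include <QSqlDatabase>
#include <QSqlQueryModel>

class TextureModel : public QSqlQueryModel
{
    Q_OBJECT
public:
    enum Roles { NameRole = Qt::UserRole + 1, ThumbPathRole };

    explicit TextureModel(QObject *parent = nullptr) : QSqlQueryModel(parent)
    {
        QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
        db.setDatabaseName("textures.db");          // hypothetical database file
        db.open();
        setQuery("SELECT name, thumb_path FROM textures", db);
    }

    // Map the custom roles used by the QML delegate onto table columns.
    QVariant data(const QModelIndex &index, int role) const override
    {
        if (role < Qt::UserRole)
            return QSqlQueryModel::data(index, role);
        const int column = role - NameRole;         // NameRole -> col 0, ThumbPathRole -> col 1
        return QSqlQueryModel::data(this->index(index.row(), column), Qt::DisplayRole);
    }

    QHash<int, QByteArray> roleNames() const override
    {
        return { { NameRole, "name" }, { ThumbPathRole, "thumbPath" } };
    }
};

// QML delegate, for illustration:
//   GridView { model: textureModel; delegate: Image { source: thumbPath } }
```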

Preset Panels — Collapsible panels, panels that can be composed, resizable panels that contain image galleries

Transform Tools — Ellipse transform tool, rectangle transform tool, masking tool, selection tool

Masking Panel — TabView, with multiple controls in each panel

2017 — DevOps —

Everything Jenkins: Freestyle Projects, Declarative Pipelines, Multibranch Declarative Pipelines, installer frameworks (PackageMaker, NSIS, Qt Installer Framework, BitRock InstallBuilder), deployments every two weeks, installer troubleshooting with anti-virus software, backup software, operating system updates, runtime issues, path issues, permission issues, and multi-program interactions.

Challenges — Sometimes a feature that was good enough for beta would not be good enough for release and had to be removed on release day. That meant going back through the GitHub history, removing the feature, and then testing to make sure the removal worked, which could be tricky. I completely understand why this happens: a public release reaches a much larger audience than a beta, and beta testers are far more forgiving than the general public.

2018 — App Development —

Topaz Studio project file

This does for Topaz Studio 2 what a .PSD file does for Photoshop. The user should be able to do work in the program, save the program state into a .tpz file, and later load that .tpz to resume working. It saves all layer image data into a multilayer TIFF using the OpenImageIO library, and uses the exiv2 library to save the image metadata for all opened images. It saves all program state into a JSON file using Qt's JSON types and QVariantMap on the C++ side. A central feature of the engine was that, at all times, the state of the program was a data structure that could be easily encoded into JSON, which made this project easy for me; if that had not been the case, this task would have been a lot harder. One of the first things we tried was to make the engine centered around “presets”, which were basically just a filter stack: a tree structure full of image processing nodes that could be traversed.
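
Because the state already lived in QVariantMap-friendly structures, the JSON half of the .tpz save is essentially a round trip through Qt's JSON classes. A minimal sketch, with invented key names (the real .tpz layout is not shown here):

```
// Hypothetical sketch of saving and restoring program state as JSON with Qt.
#include <QFile>
#include <QJsonDocument>
#include <QVariantMap>

bool saveState(const QVariantMap &state, const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    // QVariantMap -> QJsonDocument -> UTF-8 text on disk
    file.write(QJsonDocument::fromVariant(state).toJson(QJsonDocument::Indented));
    return true;
}

QVariantMap loadState(const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return QVariantMap();
    // Text on disk -> QJsonDocument -> QVariantMap
    return QJsonDocument::fromJson(file.readAll()).toVariant().toMap();
}

// Usage sketch: a filter stack could be stored as a list of maps, e.g.
//   state["filters"] = QVariantList{ QVariantMap{ {"type", "sharpen"}, {"amount", 0.4} } };
```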

After Effects Plugin for Gigapixel AI

Client Program: The client program was written using Visual Studio 2015 (MSVC 2015, v14.0) against the Adobe After Effects CC 15.0 SDK. Its essential role is to take commands from After Effects, check out video frames from After Effects, and write them to shared memory for our Topaz video server to pick up. Once it has written the requested frames to shared memory, it waits on a semaphore until the video server releases the shared memory, then copies the processed frame back out of shared memory and serves it to After Effects to be displayed to the user.

Server Program: Written in Qt 5.6. It picks up the frames from shared memory, upsamples them using the Gigapixel AI neural net, writes them back to shared memory, and releases the semaphore.
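
A rough sketch of the server-side handoff using Qt's cross-process primitives (the keys, semaphore names, and frame layout here are hypothetical, not the plugin's actual protocol):

```
// Hypothetical sketch of the server side of the shared-memory handoff.
#include <QSharedMemory>
#include <QSystemSemaphore>

void serveOneFrame()
{
    QSharedMemory frameBuffer("gp_frame_buffer");        // hypothetical segment key
    QSystemSemaphore frameReady("gp_frame_ready", 0);    // released by the client when a frame is written
    QSystemSemaphore frameDone("gp_frame_done", 0);      // released by the server when the result is ready

    if (!frameBuffer.attach())
        return;                                          // the client creates the segment first

    frameReady.acquire();                                // block until the client hands us a frame

    frameBuffer.lock();
    // ... read the raw frame from frameBuffer.data(), run the upsampling inference,
    // ... then copy the upsampled result back into frameBuffer.data()
    frameBuffer.unlock();

    frameDone.release();                                 // tell the client the processed frame is ready
}
```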

Challenges: this project was complex and large in scope because it involved interaction between three programs: After Effects, the client plugin, and the server program.

Other —

Photoshop plugin code maintenance and updates, Adobe Photoshop SDK, Adobe After Effects SDK, Google Crashpad integration.

Build and maintain third-party libraries: openimageio, opencv, libraw, openssl, Tensorflow, exiv2, lensfun, libjpeg, onnxruntime.

2019 — Cloud —

Tensorflow serving prototype —

Tensorflow Serving did not offer what we needed at the time, so we decided to roll our own and use hand-written Python scripts to do inference on our cloud servers instead.

Web development — bootstrap flex layouts, responsive design, scss, html, js, jquery, CORS

Using Postman to troubleshoot REST calls, AWS API Gateway code, AWS Lambdas, SQS queues, S3 interaction using Boto in Python 3, writing policies for S3 buckets, using Roles and IAM in AWS, multiple servers, server-side inference (if one server goes down it must fail gracefully, there cannot be duplicate jobs, and servers must process efficiently), server setup and configuration, Python multiprocessing, Tensorflow, Ubuntu, using Supervisord to manage long-running processes, Docker, and tmux to run Python processes on different GPUs and the CPU.
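
The no-duplicate-jobs requirement maps naturally onto SQS's visibility-timeout model: a worker receives a message, which hides it from other workers, and deletes it only after the job succeeds, so a crashed server simply lets the message reappear for another worker. The actual workers were Python/Boto scripts; purely as an illustration of that pattern, here is the same loop sketched with the AWS SDK for C++ (the queue URL is a placeholder):

```
// Illustrative SQS worker loop (the real workers were Python/Boto scripts).
#include <aws/core/Aws.h>
#include <aws/sqs/SQSClient.h>
#include <aws/sqs/model/ReceiveMessageRequest.h>
#include <aws/sqs/model/DeleteMessageRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        const Aws::String queueUrl =
            "https://sqs.us-east-1.amazonaws.com/123456789012/upscale-jobs";  // placeholder

        Aws::SQS::SQSClient sqs;
        Aws::SQS::Model::ReceiveMessageRequest receive;
        receive.SetQueueUrl(queueUrl);
        receive.SetMaxNumberOfMessages(1);
        receive.SetWaitTimeSeconds(20);                  // long polling

        auto outcome = sqs.ReceiveMessage(receive);
        if (outcome.IsSuccess() && !outcome.GetResult().GetMessages().empty())
        {
            const auto &msg = outcome.GetResult().GetMessages()[0];
            std::cout << "Processing job: " << msg.GetBody().c_str() << std::endl;

            // ... download the input from S3, run inference, upload the result ...

            // Delete only after the job succeeds; if this worker dies first,
            // the message becomes visible again and another server picks it up.
            Aws::SQS::Model::DeleteMessageRequest del;
            del.SetQueueUrl(queueUrl);
            del.SetReceiptHandle(msg.GetReceiptHandle());
            sqs.DeleteMessage(del);
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}
```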

Challenges: We discovered people would rather buy desktop software once than pay for each image or video upload to a cloud service. It was simply not a good product-market fit for paid image and video upsampling. People also felt uncomfortable uploading their intellectual property to the cloud, because leaks are very costly for, e.g., movie studios and game studios.

2020 — Junior ML Researcher —

Set up training machines, data cleaning and collection, web scraping, wget, BeautifulSoup, Selenium, youtube-dl, Image Quality QA Pipeline with LPIPS in Tensorflow.

CNNs, SRGAN —

I have a pretty solid grasp of CNNs and GANs, as well as other AI topics. I have also taken Machine Learning and Artificial Intelligence with Professor Vincent Ng at the University of Texas at Dallas.

Other —

Python Dataset class — this class was meant to mimic the style of the Tensorflow Dataset class and, if needed, copied a dataset from a server onto the researcher's training machine.

2021 — AiEngine Developer —

DevOps pipeline for AiEngine with CMake and Conan

Our original AiEngine code was written and compiled in Qt Creator. I replaced the qmake build system files with CMake files that I wrote myself. We did this so that the AiEngine would not be coupled to qmake, which Qt was planning to phase out with Qt 6.

AiEngine V2 Development and testing

Initial AiEngine development was a proof-of-concept endeavor. We were not certain that we would be able to successfully add the inference accelerators to replace Tensorflow. Tensorflow was a resource hog on users' machines and could also be unstable compared to the inference acceleration frameworks OnnxRuntime, CoreML, and OpenVINO. Tensorflow is better suited to training and research than to running inference on user machines.
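
As a rough idea of what the OnnxRuntime-based path looks like compared to shipping a full TensorFlow runtime, here is a minimal inference sketch (the model path, tensor shape, and node names are placeholders, not the actual Topaz model layout):

```
// Minimal OnnxRuntime C++ inference sketch; all names and shapes are placeholders.
#include <onnxruntime_cxx_api.h>
#include <vector>

std::vector<float> runModel(const std::vector<float> &input)
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "aiengine");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(2);

    Ort::Session session(env, "model.onnx", opts);   // on Windows this overload takes a wide string

    // Describe a single NCHW float tensor; the shape is a placeholder.
    std::vector<int64_t> shape = {1, 3, 256, 256};
    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
        memInfo, const_cast<float *>(input.data()), input.size(), shape.data(), shape.size());

    const char *inputNames[]  = {"input"};            // placeholder node names
    const char *outputNames[] = {"output"};

    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               inputNames, &inputTensor, 1,
                               outputNames, 1);

    // Copy the output tensor back into a plain vector.
    float *outData = outputs[0].GetTensorMutableData<float>();
    size_t outCount = outputs[0].GetTensorTypeAndShapeInfo().GetElementCount();
    return std::vector<float>(outData, outData + outCount);
}
```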

Some initial pitfalls we ran into — on AMD cards, inference with 16-bit models was faster only on newer cards. Until we really looked into this data, we were unable to understand why some users were saying "it's faster" and others were saying "it's slower." We were able to establish and maintain a B2B relationship with AMD to help us answer these questions moving forward.

AiEngine integration

Our cadence goal was to do an AiEngine "release" every Friday, which basically entailed building the current AiEngine code into a .dll and .lib on Windows and a .dylib on macOS. I did all these builds and made sure that the other apps were able to pull our AiEngine dependency and compile, run, and deploy. Once finished with building and testing, I would upload these AiEngine dependencies to Conan.

Sprint Planning and Maintaining B2B Relationships:

One of the biggest responsibilities was meeting with Intel, NVIDIA, AMD, Microsoft, and Apple, or otherwise maintaining these B2B relationships. During these meetings, huge lists of todos would be generated, and it was important to keep track of all these deliverables, get them into Asana, and do sprint planning.

Model Conversion

OnnxRuntime model conversion, building and maintaining an OnnxRuntime source build, OnnxRuntime model conversion optimization, OpenVINO model conversion, and CoreML model conversion.

QA testing post-conversion to make sure image quality and performance do not regress.

Other —

Detect AVX2 support on the CPU; a logging tool for detecting slowdowns that just runs the app, records how long everything takes, and writes the results into a .csv file which can then be loaded into a spreadsheet.
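
A simplified sketch of those two utilities (much reduced relative to the actual tools): detecting AVX2 via CPUID, and a scoped timer that appends rows to a .csv file.

```
// Simplified sketch: AVX2 detection plus a scoped timer that appends to a CSV file.
#include <chrono>
#include <fstream>
#include <string>

#if defined(_MSC_VER)
#include <intrin.h>
#endif

// AVX2 support is reported in CPUID leaf 7, sub-leaf 0, EBX bit 5.
bool cpuSupportsAvx2()
{
#if defined(_MSC_VER)
    int info[4] = {0};
    __cpuidex(info, 7, 0);
    return (info[1] & (1 << 5)) != 0;        // info[1] is EBX
#else
    return __builtin_cpu_supports("avx2");   // GCC/Clang builtin
#endif
}

// Times a named step and appends "<step>,<milliseconds>" to timings.csv on destruction.
class ScopedTimer
{
public:
    explicit ScopedTimer(std::string name)
        : m_name(std::move(name)), m_start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer()
    {
        using namespace std::chrono;
        const double ms = duration_cast<duration<double, std::milli>>(
                              steady_clock::now() - m_start).count();
        std::ofstream csv("timings.csv", std::ios::app);   // hypothetical output path
        csv << m_name << "," << ms << "\n";
    }

private:
    std::string m_name;
    std::chrono::steady_clock::time_point m_start;
};

// Usage sketch:
//   { ScopedTimer t("load_model"); /* ... */ }   // writes one CSV row per timed step
```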

All blue text in the slideshow below is a link with more details on the subject: a demo I have made to demonstrate the concept, a video of me explaining the feature or project, an article I have written, or some other relevant example.

Link to the Slideshow: https://docs.google.com/presentation/d/1Rd4Ywcp8urxz1l6OwqTdsD26s3WtTTHwS67w1LNqZco/edit?usp=sharing

Here are all the links in the slideshow:

This is not a complete set of all the links above.

Frontend:

Collapsible Panel: https://www.youtube.com/watch?v=yLq0_wN4Ho0

Interactive GridView SQLITE backend: https://youtu.be/oO1XAskFYpU

Masking Panel: https://youtu.be/ZvrXXD3EK3M?t=598

2D Map Tool: https://youtu.be/R3DrObPnJiE

Curve Tool: https://youtu.be/B1kEmZ50aEY

MotionBlurs: https://youtu.be/2uY4jrDI7ho

Transform Tools: https://youtu.be/oO1XAskFYpU?t=364

WalkThrough: https://www.youtube.com/watch?v=jKKswaTQYb4

E_FAIL: https://ashley-tharp.medium.com/error-during-installation-process-error-while-extracting-archive-internal-code-e-fail-65e6f44f0f36

Permissions / Help Center Documentation: https://help.topazlabs.com/hc/en-us/articles/360040601812

Declarative Pipeline and Blue Ocean: https://medium.com/the-innovation/why-i-love-jenkins-so-much-de20c97fad89

Git LFS Integration: https://github.com/sitting-duck/stuff/tree/master/devOps/git_lfs

Conan Package Manager Integration: https://ashley-tharp.medium.com/error-bin-sh-cmake-command-not-found-on-macos-a24705b14b21

App Development:

metadata: https://help.topazlabs.com/hc/en-us/articles/360039778252

After Effects Plugin: https://s3.amazonaws.com/ashleyntharp.com/after_effects/index.html

Web Development: https://youtu.be/kzHZCQUZHR8

Junior Researcher/MLOps:

Web Scraping: https://github.com/sitting-duck/stuff/tree/master/ai/scraping

Selenium: https://ashley-tharp.medium.com/how-to-stay-logged-in-when-using-selenium-in-the-chrome-browser-869854f87fb7

Image Quality QA Pipeline with LPIPS in Tensorflow: https://www.youtube.com/watch?v=aUHSjDao5Cg

AiEngine V2 Development and Testing: https://www.youtube.com/watch?v=q9Ql0wsd1Lk

OnnxRuntime Model Conversion: https://github.com/sitting-duck/stuff/blob/master/ai/onnxruntime/convert_loop.sh

OnnxRuntime Model Conversion Optimization: https://www.youtube.com/watch?v=VjUWwrW9EVY&t=47s

Want to hire me? A summary of my recent work can be found on this account. More info at ashleyntharp.com
