Thoughts From My Life

Advice from someone who does not know everything.

Artaygo – Artificial Intelligence Generated Artwork

I recently spoke with the owners of Artaygo. The company offers one-of-a-kind canvas art pieces that are generated using artificial intelligence. It’s a unique product offering for customers who would like original pieces at more affordable prices. There are multiple themes available, and the selection continues to grow.

The following is a question and answer style interview about the company and how the artwork process works.

Q: Why the name “Artaygo”?  Where did it come from?

The choice of the name Artaygo is a mix of branding potential, ease of pronunciation, availability, and linkage to the content. Artaygo is primarily a play on words combining “Art” and “AI” in a way that is easy to pronounce. We started with a brainstorming session, writing down a few dozen words that might be relevant – words like AI, art, artificial, machine, gallery, generative and so on – and from there played around with variations of them.

There are quite a few sites that help with generating business names, which were helpful for getting ideas for prefixes and suffixes. But at the end of the day it came down to spending a weekend with Excel open, generating all kinds of prefixed and suffixed versions, and picking favorites.

“The Darkseed” – AI Generated Artwork

Q: What is your interest in art historically?

Although our personal background is pretty focused on finance and capital markets, we’ve done a lot of work in the past in Photoshop and graphic design, and have experimented with other visual arts packages, including 3ds Max, Maya and Blender.

As a homeowner it also became more topical to think about home decorating, framed canvas and other wall art. We have a mix of art types at home, and I’ve often wondered if there was an interesting middle ground between basic printed wall art and more expensive original oil paintings.

Q: When did you get the idea to create this site and what were your motivations?

Our introduction to AI / deep learning / machine learning was in late 2017, and we became very familiar with a lot of the different algorithms that existed back then. Initially it was more out of general curiosity, as a bit of a sci-fi nerd. The general concept of AI seemed so far out versus anything we had seen previously. Just the idea that you can create a generalized architecture in which a computer can ‘think’ for itself and solve a variety of problems without much human input is completely amazing. And having watched this space for several years now, it’s incredible to see the innovation that’s been happening – much of it just trial and error, because there isn’t a lot of academic theory behind it yet.

In terms of motivations, we really like art, graphic design, and AI personally, and with a business background, what better way to put that together than by putting computers to work at creating art? I also think it’s a great service that benefits people, because while everyone aspires to have their own original hand-painted artwork, realistically you’re not going to furnish your first home with multiple pieces of art at $2,000-$10,000+ each. So for folks getting into their first home, there wasn’t really a way to decorate a house with something completely original, one-of-a-kind, made with high quality materials, at a reasonable price. So it’s great to be a part of offering a solution to that problem.

Q: I would love some details on how this works if it doesn’t give away any secrets. What is the algorithm behind the art generation?

The core principle is something called a generative adversarial network (GAN). You actually create two neural networks which fight against each other. Initially, the generator network really just makes colorful noise – random guesses for every pixel in an image – while the discriminator network looks at the fake noisy pictures and at real pictures, and assigns a probability of each image being real or fake. At the start, neither the generator nor the discriminator has any idea what it’s doing, and both are pretty inaccurate. Then the results are scored: the generator is rewarded for fooling the discriminator, and the discriminator is rewarded for correctly determining whether an image is real or fake. This process repeats, and the act of rewarding the networks causes them to improve over time.
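To make that back-and-forth concrete, here is a minimal sketch of the adversarial loop in PyTorch. The network sizes, 64×64 image resolution, and learning rates are illustrative assumptions, not details of Artaygo’s actual model.

```python
# Minimal GAN training-step sketch (illustrative, not Artaygo's model).
import torch
import torch.nn as nn

latent_dim = 128
device = "cuda" if torch.cuda.is_available() else "cpu"

# Generator: turns a random "seed" vector into a flattened 64x64 RGB image.
G = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 64 * 64 * 3), nn.Tanh(),
).to(device)

# Discriminator: scores an image as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(64 * 64 * 3, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_images = real_images.view(batch, -1).to(device)

    # 1. Reward the discriminator for telling real from fake.
    z = torch.randn(batch, latent_dim, device=device)
    fake_images = G(z).detach()  # detach: don't update G on this pass
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1, device=device)) + \
             loss_fn(D(fake_images), torch.zeros(batch, 1, device=device))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Reward the generator for fooling the discriminator.
    z = torch.randn(batch, latent_dim, device=device)
    g_loss = loss_fn(D(G(z)), torch.ones(batch, 1, device=device))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Demo with random stand-in "real" images (8 images, 3x64x64):
d_loss, g_loss = train_step(torch.randn(8, 3, 64, 64))
```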

Q: If it is being trained, how does it find art to train itself with?

We need to provide the GAN with copies of what it’s trying to emulate – and the broader the subject matter we provide, the broader the range of outputs we produce. So if you want the GAN to produce portraits of people, it won’t learn to do that if you provide it pictures of landscapes. On the other hand, if you provide it pictures of all art ever produced, the model will suffer from numerous challenges, including taking a very long time to train and having a tendency to ‘cheat’ by drawing just one type of art successfully while ignoring other types, among other issues. So we have to act as a teacher who guides the student through a reasonably curated set of examples.

However, once you have a trained artist, you can ‘transfer’ its learnings and begin learning a different subject matter more quickly. So if you had taught an AI artist landscape painting, you can take a copy of that artist and tell it to train on portraits, and it will adapt to being a portrait painter much more quickly than if it started from scratch.
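In code, this “transfer” boils down to reusing the trained weights as the starting point. A hedged sketch, continuing the training-loop sketch above – the file name, smaller learning rate, and `portrait_loader` dataset are illustrative assumptions:

```python
# Transfer-learning sketch: reuses G, train_step() and the optimizer setup
# from the earlier GAN sketch; only the starting weights and data change.
import torch

G.load_state_dict(torch.load("landscape_generator.pt"))  # the trained "artist"
opt_g = torch.optim.Adam(G.parameters(), lr=5e-5)  # smaller steps: adapt, don't restart

for real_portraits in portrait_loader:  # hypothetical DataLoader of portraits
    train_step(real_portraits)          # same adversarial loop as before
```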

“Summertime Sadness” – AI Generated Artwork

Q: What computer hardware is being utilized to create the artwork?  Is it local or cloud hosted?  

Nearly all machine learning these days is done on graphics cards, primarily made by NVIDIA, because they’re really effective at the type of math required for training neural nets. Your typical Intel or AMD CPU is extremely fast at doing single tasks (or 8-16 tasks, if we’re talking typical CPUs these days with multiple cores). Compare that to graphics cards, which are designed to do a smaller amount of math across millions of pixels in unison, repeated at least 60 times per second. That structure is much more aligned with how neural networks behave, with millions of connections that need to be updated simultaneously during the training process. So we’ve been fortunate enough to get a very good NVIDIA card and run a lot of work locally, but it’s also required a lot of effort in managing memory and using a few hacks to live within memory and time constraints.
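One common example of the kind of memory hack mentioned above (whether Artaygo uses this exact technique is an assumption on my part) is mixed-precision training, which runs most of the math in float16 to roughly halve activation memory. A PyTorch sketch with a stand-in model:

```python
# Mixed-precision (AMP) sketch - an assumed example of a GPU memory hack.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 512).to(device)   # stand-in for a real network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 512, device=device)  # stand-in batch
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = model(x).pow(2).mean()         # forward pass runs in float16 on GPU
scaler.scale(loss).backward()             # scale the loss to avoid underflow
scaler.step(opt)
scaler.update()
```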

We have been thinking about migrating to Amazon AWS for a couple of reasons. First, running it locally means it fully occupies the local machine – and especially as we approach the summer, it starts to heat up a room pretty fast! Going to AWS would also let us get quicker turnaround on experimentation, and simply not have to think as much about the trade-offs between complexity and training speed.

Q: How long does it take to train and how many pieces of art will it train with?  This probably translates into an average time per piece of training art?

Training and art generation are similar to what you might expect from a human artist, just accelerated. You might think of a human artist training for years to become a master, at which point each work of art is quite good and takes an order of magnitude less time to paint than the cumulative training time up to that point. Similarly, AI artists have a large up-front training time that can take days or weeks, at which point the artist is quite good and can produce new art relatively quickly. In terms of training materials, you can theoretically achieve ‘interesting’ results when you train on a very small dataset of dozens of images – sometimes an output with unusual visual artifacts or ‘stuttering’ can make for an appealing effect. But generally you want a dataset at least in the ‘low thousands’ of images, and at the extreme upper end maybe 50 to 100 thousand images. But that upper limit keeps coming down as new techniques and architectures develop.

One of the recent discoveries by researchers is that if you use aggressive augmentation of the images, it works almost as well as having extra original images. For example, you can flip an image horizontally, zoom the image, slightly rotate or tilt it, or adjust its color saturation slightly – and doing these small adjustments greatly improves the ability of GANs to produce a variety of output without needing an abundance of input materials to learn from.
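For the curious, here is roughly what such an augmentation pipeline looks like in code – a sketch using torchvision, with the specific transforms and strengths chosen for illustration rather than taken from Artaygo’s pipeline:

```python
# Aggressive-augmentation sketch (illustrative transforms and strengths).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),               # mirror the image
    transforms.RandomRotation(degrees=5),                 # slight tilt
    transforms.RandomResizedCrop(256, scale=(0.9, 1.0)),  # slight zoom
    transforms.ColorJitter(saturation=0.1),               # small saturation shift
    transforms.ToTensor(),
])
# Each epoch the GAN sees a slightly different version of every training
# image, which behaves almost like having extra originals.
```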

Q: What were some of the biggest hurdles in developing it? 

There are tons of hurdles that we didn’t expect – coding, getting good data, seeing unexpected results, and even getting set up with print suppliers. One issue that’s a bit more tech-oriented is simply the trade-off between resolution and compute time. Running a basic GAN at 256×256 pixels for 12 hours might produce great results. But to generate that at 512×512 – twice the width and height – actually takes 4x longer to train: 48 hours. Going to higher resolutions takes even more time, so getting the quality of that original 256×256 image at 2048×2048 would take 768 hours (about 32 days) – or so you would think.
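The arithmetic behind those numbers is simple enough to write down: doubling width and height quadruples the pixel count, and training time scales roughly in proportion. A back-of-envelope sketch (an approximation, not an exact law):

```python
# Resolution vs. training-time rule of thumb from the paragraph above.
def estimated_hours(resolution, base_resolution=256, base_hours=12):
    scale = (resolution / base_resolution) ** 2  # 2x resolution -> 4x pixels
    return base_hours * scale

for res in (256, 512, 1024, 2048):
    print(res, estimated_hours(res), "hours")
# 256 -> 12, 512 -> 48, 1024 -> 192, 2048 -> 768 hours (~32 days)
```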

You can imagine the frustration when you think everything is going to work, and 8 days into training you check the results and every image looks identical – a phenomenon called mode collapse (or GAN collapse). That happens when the networks get ‘stuck’ and only generate a single type of content. Working around these kinds of issues, where you spend a lot of time waiting before knowing whether you’ll have success – that’s probably the hardest hurdle.

Q: Do you have any noteworthy improvements you want to make to the algorithm? 

There isn’t anything architecture-wise that is mission critical, so any improvements are very ‘on the margins’ and a bit academic. But SOTA (state of the art) is always changing, so we could completely overhaul the architecture if there was a compelling enough reason. There are many public GAN codebases which occasionally implement new techniques, so we watch to see what is new and interesting there, and whether we can implement similar concepts. Most GANs today rely on an image recognition architecture called convolution, which allows the AI to detect primitive shapes and stack them into more complicated features. What gets us most excited is stepping outside of convolution into alternate architectures called Transformers (also known as Attention) and Diffusion Models, but they aren’t surpassing convolutional approaches just yet. So the old adage holds true: if it ain’t broke, don’t fix it!

Q: How long did it take you to develop the app and output your first “usable” piece of artwork?

It took a few months to get things up and running to where the content looked decent. So much code and so many research papers are open sourced that you can get the basics going quickly, but implementing some of the newer features like style mapping and augmentation takes more work. The very first piece of content ended up not looking as great as we hoped when printed to canvas, so we went back to the drawing board and found solutions to get final output that looked good. I still have a copy of that early framed canvas hanging in my office as a reminder of where things started.

Q: Of the artwork generated, does a human look through for ones that look interesting? What percentage are rejected? What are the reasons for rejection?  General appearance does not look good, doesn’t fit the theme, too similar to other art pieces, etc?

Yes, we look through and reject anything that looks subjectively ‘bad’. It’s maybe 10% or so that gets rejected. For example, the training data might include a black and white image, which we thought was useful for capturing the foliage of a different tree style. But some small percentage of output images may then appear partially black and white, and just don’t align with the collection very well. Sometimes the unusual artifacts that get generated are actually really cool (see a blog post about the hidden secrets of AI art here), but sometimes they’re more obviously ‘mistakes’. The longer we train the models, the lower the rejection rate. But it’s sometimes easier, and more enjoyable, to just take a look at the artwork.

Q: Does a person create the titles for each art piece?

It’s a mix – initially we created most of the titles by human means, pulling in the expert assistance of my daughter! In one of our recent collections, called the Alleys of Old Europe, we made a simple GAN to generate names of European cities and used that to give every image a name belonging to a totally fictional city. So they have names like Midleshannon, Prejek and Afragliano, which to the best of our knowledge are not actual places, but sound like they’d fit right into England, Croatia and Italy. As we move forward, we certainly plan to implement other systems that name the works as well.
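Artaygo used a small GAN for the city names; as a rough illustration of the general idea (learning letter patterns from real names and sampling new ones), here is a much simpler character-level Markov-chain sketch, with a tiny made-up training list:

```python
# Character-level name generation sketch (a simpler stand-in for the GAN).
import random
from collections import defaultdict

real_cities = ["shannon", "midleton", "split", "osijek", "napoli", "milano"]

# Count which letter tends to follow each two-letter context.
counts = defaultdict(list)
for name in real_cities:
    padded = "^^" + name + "$"              # ^ marks the start, $ the end
    for i in range(len(padded) - 2):
        counts[padded[i:i+2]].append(padded[i+2])

def invent_name():
    context, out = "^^", ""
    while True:
        nxt = random.choice(counts[context])  # sample a plausible next letter
        if nxt == "$" or len(out) > 12:
            return out.capitalize()
        out += nxt
        context = context[1] + nxt

print(invent_name())  # something plausible-sounding but (hopefully) fictional
```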

“Korzë” – AI Generated Artwork

Q: Could a user ever “influence” the algorithm to generate more custom art?

In theory yes, and there are a couple of ways to achieve that. One approach we’ve seen in other GAN repositories is to label the training data, so while the GAN is trained on a large body of work, you have more control in terms of generating content which is true to a label. So you could train it on a collection of cat, dog and horse images, but then ask to have only horses generated. Another approach is a technique called neural style transfer, where you transfer the visual style of one image onto another – so you can project the style of Van Gogh’s Starry Night onto a picture of your backyard. There are a few other techniques that can be adapted from other GANs as well.
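The labeling approach is usually implemented as a conditional GAN, where the label is embedded and fed to the generator along with the random seed, so you can ask for horses specifically. A sketch, with sizes chosen purely for illustration:

```python
# Conditional-GAN generator sketch: steerable by label (cat/dog/horse).
import torch
import torch.nn as nn

num_classes, latent_dim = 3, 128  # labels 0 = cat, 1 = dog, 2 = horse

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 32, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Concatenate the seed with the label embedding, so one network
        # learns all three subjects but can be steered at will.
        return self.net(torch.cat([z, self.label_embed(labels)], dim=1))

G = ConditionalGenerator()
horses = G(torch.randn(16, latent_dim),
           torch.full((16,), 2, dtype=torch.long))  # label 2 = horses only
```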

Q: You say it is one-of-a-kind, how do you guarantee that?  The art is taken down and not available for purchase once someone does buy it?

That’s right – although the buyer has full control over what size they want the art produced at, once a single print is purchased it is no longer available for sale again at any size. As for guaranteeing uniqueness: depending on the model we’re using, the initial ‘seed’ of an image is based on a 256 to 1,024 digit random code, so the odds of seeing an identical work are astronomically small (you might say there are 10^256 to 10^1024 possible inputs, compared to roughly 10^80 atoms in the universe). Furthermore, as we train and update the models over time, the same random digit sequence won’t produce quite the same output, even if you re-used the same seed.
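Mechanically, the seed drives the latent vector fed to the generator: the same seed with the same model weights reproduces the same image, and any other seed gives a different one. A sketch (the 512-dimension latent size is an illustrative assumption):

```python
# Seed-to-artwork sketch: deterministic generation from a random code.
import torch

def artwork_from_seed(generator, seed, latent_dim=512):
    rng = torch.Generator().manual_seed(seed)      # deterministic source
    z = torch.randn(1, latent_dim, generator=rng)  # the image's "DNA"
    with torch.no_grad():
        return generator(z)

# Same seed twice -> identical image; retraining the model later means the
# same seed produces a different image, as described above.
```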

Q: Does the purchaser also get the digital file?

At this moment, no, but we might consider expanding that offering. There is also a lot of excitement about NFTs – non-fungible tokens – where buyers receive a digital copy of the art along with blockchain-certified proof of ownership. We’re not sure if our target market tilts that direction – physical ownership has its perks! But it’s certainly something we’d be open to exploring.

Q: Do you retain the original file as well in case a reprint is needed?

We don’t. We do retain the files during the refund period in the event a customer doesn’t want to keep their piece, and also to ensure that if the art is damaged during delivery we can offer a replacement. However, after the return window closes, we delete the high-res files and just keep low resolution versions for marketing purposes and to have a visual record of what’s been sold. While it might be nice to have a ‘backup’ with us, we felt that customers having confidence in the uniqueness of their product was preferable.

Q: I see themes on the site. What other themes can we expect upcoming?

Within the broader impressionist school, we’re looking at still life, portraits, and potentially more narrow themes – impressionist “fields” versus impressionist “mountains” or “cityscapes”. We also have quite a few ideas in photorealism, but those require a bit more field work to gather original private content. I think there is a lot of potential in combining styles and content that have never co-existed – just off the top of my head, maybe a series of sports cars rendered with Japanese sumi-e style brush strokes.

Q: Any plans on other product offerings, whether different sizings or other mediums?

In terms of sizing, we can technically offer any size – so while we have a pretty good range on the website, in theory a customer can reach out and have something different produced. Moving to different aspect ratios is in the works as well. There are some upper limits on size with current technology – if you’re looking for 300 DPI images at 36 inches × 36 inches, that’s 10,800 pixels per side, and even small increases in size rapidly increase computing requirements. We’ve also considered alternate finishing options, like acrylics, but for now we’re keeping the offerings relatively simple.

Company site: https://www.artaygo.com/

Phone Scam Claiming to be The Mobile Shop

Update, Oct 1st 2020: I received a call back from The Mobile Shop after posting a message to them through their website. They verified this is a scam and were looking into it, though they weren’t sure they could do anything about it. They clarified that they would never confirm identity details over the phone; you would need to physically go into a store to do so. And contrary to what the scammer will say on the phone, their stores are open.

The Scam

On September 24th, 2020, I received a call from 1-647-258-5447 that turned out to be a scam.

Basically, they said they were with “The Mobile Shop”, the mobile phone stores found inside Superstore locations. They stated their physical stores were closed due to Covid and that they were doing promotions over the phone.

It started out suspicious, in that they said they could offer promotions for all the major Canadian carriers. They then asked me for info about my situation:

  • What carrier am I with?
  • How much do I pay for my plan?
  • How much data do I get with my plan?
  • What are my voice minutes and texts allowed?
  • What type of phone do I have?
  • What model of phone do I have?

It was at this last question that I really started wondering what was going on. So I didn’t answer, and asked him what the deal was. He said his name was Raj and he could offer me $45/month for 8GB of data and unlimited calling and texting within Canada – no contract and no new phone purchase required. Well, this was too good to be true.

Continue reading

Diamond Multimedia Dock DS3900

I have been running a Microsoft Surface Pro for a few years now. I recently purchased a large monitor for my desk workspace and wanted to get more peripherals connected, to make my life easier when I was at my desk. So I purchased the Diamond Multimedia DS3900, and it has been a really nice experience.

Essentially it provides a variety of ports and connectivity options, and it has worked flawlessly with my Surface Pro 4.

Continue reading

Scotiabank Passport Visa Infinite Card

I recently switched credit cards. I had been running the BMO AIR MILES World Elite Mastercard for almost 15 years. It worked well for the family, and we used the Air Miles for short haul flights in our part of the country. However, we found we wanted more flexibility with our rewards along with similar travel perks, and I had the opportunity to have the fee covered by switching to the Scotiabank Passport Visa Infinite Card.

Essentially it is a very similar card in terms of travel features, but the rewards can be used in a variety of ways. In particular, I can apply the points like cash towards any travel-related purchase on my Visa. So I can always find the cheapest flight online through a travel booking site or the airline itself, book with my Visa, and then simply transfer over the points to pay for it.

Continue reading

AWS Glue Crawler – Multiple tables are found under location

I have been building and maintaining a data lake in AWS for the past year or so, and it has been a learning experience to say the least. Recently I had an issue where an AWS Glue crawler stopped updating a table in the catalog that represented raw syslog data being imported.

The error being shown was:

INFO : Multiple tables are found under location [S3 bucket and path]. Table [table name] is skipped.
Continue reading

Trade Your Way to Financial Freedom – Book Review

I’m reading Trade Your Way to Financial Freedom right now – an excellent book, in my opinion. I had stopped doing any type of trading a couple of years ago, but I have always found it interesting, so I decided to get active in it again.

I’m only going to “paper” trade for now. This is “fake” trading to see how you perform before ever risking any real money – a way to prove out a system and see how it would have done if you had traded real money (though if you traded real money it would have some effect on the market and might change the outcome).

“Trade Your Way to Financial Freedom” isn’t about teaching you any specific trading system; it is more about how you evaluate the system you are using, and yourself. It does briefly cover a variety of strategies and their bases – not recommending one over another, but discussing what is commonly used out there and why each may appeal to different individuals.

Personality

Chapter 3 has a great list of questions that you need to answer about yourself to gauge your capabilities. Items like:

  • How much time do you have to trade? Do you already have a full time job?
  • What is your risk tolerance? Will you lose sleep at night if you have money tied up in the markets?
  • What skills do you have? Good at math? Good with computers? Analytical?

Analyzing the System

A lot of terminology is used here, but it is explained very clearly. The reward-to-risk ratio is commonly used to gauge how good your returns are compared to the risk you are taking, and position sizing is touched on as well; there’s a simple illustration of the ratio below. I’m only three-quarters of the way through the book, but it has been excellent. I recommend it to anyone thinking of getting into trading.
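As a hedged sketch of the reward-to-risk idea (the numbers are made up for illustration, not from the book): size each trade by the amount you’re risking, then judge the potential outcome in multiples of that risk.

```python
# Reward-to-risk ratio sketch with made-up numbers.
def reward_to_risk(entry, stop, target):
    risk = abs(entry - stop)      # what you lose if the stop is hit
    reward = abs(target - entry)  # what you gain if the target is hit
    return reward / risk

# Buying at $50 with a stop at $48 and a target at $56 risks $2 to make $6:
print(reward_to_risk(50, 48, 56))  # 3.0, i.e. a "3R" trade
```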

Ford Pass Experience

I recently purchased a used 2017 F-150 XLT 302A SuperCrew truck, jumping ahead quite a few years from the 2006 truck I was previously driving. One thing I was pleasantly surprised by was the technology Ford has in its vehicles, in particular the Ford Pass application.

Continue reading

2017 F-150 XLT XTR 302A Supercrew Truck

I recently switched trucks, going from a 2006 Dodge Quad Cab to a 2017 Ford F-150 XLT XTR 302A Supercrew Shortbox. It felt like the right time to make a change: the family was growing, and having a proper crew cab for more interior space seemed like the right choice. In this article I will talk about what I was looking for, why I picked this truck, and some useful tips and tricks for the F-150.

Why This Truck?

I was looking for several specific things, listed here roughly from most important to least important.

  • Large seating room in the back. The Supercrew F-150 has loads of room in the back, and I have 3 children who are school age now, so piling into the back with backpacks and child seats takes up a lot of room.
  • Aluminum body. Less opportunity for rust to set in – I had this issue with my last truck, for a few reasons really: I’m not that diligent in washing my vehicle, I live in a climate where road salt is used, and I park in a heated garage where that salt, mixed with the moisture from melting snow and ice, can just eat away at the body.
  • XLT trim. This is only the 2nd lowest trim package they offer, but it is the highest trim package that still offers 6 seats (see below). It is also a lot more reasonable in price.
  • 6 seats. This is related to the XLT trim, but we occasionally use that additional seat in the front middle. When you don’t need the seat, the backrest folds down into an arm console that has a storage area with a flip-up lid and 2 cup holders. An additional 2 cup holders can be made available on the floor if you need them.
  • 302A package. This is the top luxury package available on the XLT. Lots of little things such as:
    • Heated seats
    • Large console screen
    • Sync3 infotainment screen that supports Apple CarPlay
    • Remote start
    • LED lighting in the truck bed
    • 400W 110V plugin in the front
  • Shortbox. Only a 5.5′ box, but I don’t need that much room (stores occasional work supplies, load of dirt or bark mulch, and hockey gear). This kept the truck around the same length as my Dodge Quad Cab so it fits in the garage the same way.
  • Has the tailgate step as well.
  • White – I find it easier to keep clean and not show scratches compared to black.
  • Better gas mileage than before. I was driving the Dodge Hemi, so gas mileage was worse, but I don’t put on enough mileage for it to really matter. I could always have found a 3.6L Pentastar in a Dodge for better mileage, too.
Continue reading

Creating Data-Driven SSRS Reports in SQL Server Standard Edition

I recently had to resolve this issue: I was running SQL Server Standard Edition and needed an SSRS subscription that behaved like a data-driven subscription. Data-driven subscriptions are available in SQL Server Enterprise Edition, but not Standard.

What is a Data-Driven Subscription

This is a very useful tool in Enterprise Edition. A regular subscription is simply a scheduled time when a report will run and be sent to email addresses or saved to a network share. An individual subscription runs just once per scheduled time and always uses the same report parameters and recipient information.

A data-driven subscription lets you run a database query that returns zero or more result rows. The report is then run once for each row the query returns. This allows you to do 2 important things:

  • Not run the report if it does not have any meaningful results – e.g. don’t send anything out.
  • Run multiple reports at once with customized parameters (who it is sent to, the subject of the email, the parameters used to generate the report, etc.). This allows you to scale subscriptions really easily, without having to go in and manually create a subscription for each recipient every time.

But I Do Not Have SQL Server Enterprise Edition

This means you do not have the data-driven feature when creating subscriptions. If you are doing a lot of data-driven report subscriptions, it may be worth your while to pay for the Enterprise license, as it will mean less development time and be more intuitive for users than what I describe below.

So we will have to come up with an alternative method to achieve something similar.

The Design

  • Publish an SSRS report if it is not published already.
  • Create a regular subscription using the SSRS interface.
    • Set any parameter values that are constant.
    • You may set the other parameters, but they will be overwritten anyways.
    • Set the schedule to have a stop date in the past. This makes sure the subscription is active but will not run on its own.
  • Create a stored procedure to run our report.
    • Code example in detail below.
    • Needs to run a query that provides how many times you want the report to run and what the override parameters should be.
    • Will loop over the results to:
      • Override the parameters stored in the table.
      • Execute the report.
      • Wait until the report is finished.
      • Continue.
  • Create a SQL Agent job to run the stored procedure on a schedule.

A Note on Waiting for Each Report to Finish

Calling EXEC ReportServer.dbo.AddEvent does not block the script until the report is finished; it merely adds the report to the queue to be processed. If the report takes a while to run, this script may overwrite the parameters and settings while they are still being used. So it is important to have a loop monitoring the Event table to make sure the report has completed before continuing.

Stored Procedure Example

This example determines how many reports need to be created and runs each report consecutively, waiting for each report to finish before triggering the next one.

--Steps to create data-driven like subscription in SQL Server 2016 Standard Edition:
--1. generate a subscription for a single user and note the subscription_id
--2. create stored procedure to update settings and generate reports
--3. schedule a job to run this procedure

-- Notes to Remember
-- Need to use a Job to run this and schedule.  
-- The subscription_id will be generated at the time the report subscription is created
-- The subscription should be set to active with a stop date in the past so it doesn't run on its own if not desired, but is an active report subscription.

IF EXISTS (SELECT 1 FROM sysobjects WHERE id = OBJECT_ID('DataDrivenSSRSSubscription_MyExampleReport') AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
BEGIN
    DROP PROCEDURE dbo.DataDrivenSSRSSubscription_MyExampleReport
END
GO

CREATE PROCEDURE [dbo].[DataDrivenSSRSSubscription_MyExampleReport]
    @subscription_id varchar(50),
    @parameter_id int
AS
BEGIN
    DECLARE @v_email varchar(256), @v_parameter1 varchar(256),
            @settings varchar(4096), @params varchar(4096), @wait smallint

    -- Columns must be listed in the same order they are fetched below.
    -- (Some_Table, email and parameter1 are placeholders for your own schema.)
    DECLARE db_cursor CURSOR FORWARD_ONLY READ_ONLY FOR
        SELECT email, parameter1
        FROM Some_Table
        WHERE Some_Table.parameter_id = @parameter_id

    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @v_email, @v_parameter1

    -- Loop through the recipient/parameter list
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Wait until the previous run has been cleared from the reporting
        -- queue; AddEvent only queues the report, it does not block.
        SELECT @wait = COUNT(*) FROM ReportServer.dbo.Event WHERE EventData = @subscription_id
        WHILE @wait > 0
        BEGIN
            WAITFOR DELAY '00:00:01'
            SELECT @wait = COUNT(*) FROM ReportServer.dbo.Event WHERE EventData = @subscription_id
        END

        -- Override the delivery settings (recipient) and report parameters
        SET @settings = '<ParameterValues>
  <ParameterValue><Name>TO</Name><Value>' + @v_email + '</Value></ParameterValue>
</ParameterValues>'

        SET @params = '<ParameterValues>
  <ParameterValue><Name>Parameter1</Name><Value>' + @v_parameter1 + '</Value></ParameterValue>
</ParameterValues>'

        UPDATE ReportServer.dbo.Subscriptions
        SET ExtensionSettings = @settings,
            Parameters = @params
        WHERE SubscriptionID = @subscription_id

        IF @@ERROR = 0
        BEGIN
            -- Queue the report run (sends the email when it completes)
            EXEC ReportServer.dbo.AddEvent @EventType = 'TimedSubscription',
                                           @EventData = @subscription_id
        END

        FETCH NEXT FROM db_cursor INTO @v_email, @v_parameter1
    END

    CLOSE db_cursor
    DEALLOCATE db_cursor
END

References

These are some really good references that may be easier to understand. One note, though: they do not cover waiting for each report to finish before moving on to the next report.
