FrontBricks, my LLM-based weekend project which is inspired by Vercel’s V0

Since 2022, there has been a lot of hype around generative artificial intelligence, and it has resulted in a bunch of cool projects. A lot of us may remember, though, that GitHub's Copilot is much older. Back then, I wrote an article about how I was too cheap to pay $10 a month for Copilot, so I made my own!

That was, in a way, the beginning of my interest in the AI field. I've spent around four years in this field and, like most of us, I've tried different tools and products. In this article, I'm talking about FrontBricks, my newest product, and how it started as a weekend project!

A little bit of history

In 2023, I launched Mann-E, an AI image generator based on its own models (more information is provided on the website). A few months ago, I also launched Maral, a 7 billion parameter LLM specialized for the Persian language (the language I speak).

Also, around a month ago, I did some tests with brand new LLMs such as LLaMa 3 in order to make Mann-E Search, which can be something of an alternative to Perplexity, with one small difference (it doesn't provide a chat interface).

I guess this clarifies how deep I am in the AI space and how much I love generative AI! Now we can talk about FrontBricks!

What is FrontBricks?

You may be familiar with Vercel's V0, a generative AI tool that helps people generate frontend components. I liked their idea, so I joined their waitlist, and a couple of days later I got access to the platform.

It was a cool experience, and some sparks formed in my head. I found out that pretty much all LLMs are really good at code generation, and that we can use one to generate the code and another to check whether the code is valid or not.

This was my whole idea, so I sat at my desk and started coding a basic tool that sends my prompts to OpenAI's API for generation, and another that does the validation using LLaMa 3 70B and GPT-4 (through OpenAI again).
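
To make the idea concrete, here is a minimal sketch of that generate-then-validate loop, assuming the OpenAI Python SDK. The model names, prompts and helper functions are illustrative, not FrontBricks' actual code.

# Minimal sketch: one model generates the markup, a second call validates it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_component(prompt: str) -> str:
    # Ask one model to write plain HTML for the requested component.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You generate clean HTML for frontend components."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

def validate_component(code: str) -> bool:
    # Ask a stronger model whether the generated markup looks valid.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Answer only YES or NO. Is this valid HTML for a UI component?\n\n{code}"}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

code = generate_component("A pricing card with three tiers")
print(code if validate_component(code) else "Validation failed, try regenerating.")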

I also found another bottleneck: JSX code generation. I did a little bit of research and found that it's not really a big deal; using the power of regex and text manipulation, it's easily possible to turn pure HTML into JSX!
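
As a rough illustration of that regex/text-manipulation trick, here is a tiny Python sketch that covers the most common HTML-to-JSX differences (attribute renames and self-closing void tags); a real converter would need more rules than this.

import re

VOID_TAGS = "img|br|hr|input|meta|link"

def html_to_jsx(html: str) -> str:
    # rename the attributes JSX spells differently
    jsx = html.replace("class=", "className=").replace("for=", "htmlFor=")
    # make void elements self-closing, e.g. <img ...> becomes <img ... />
    jsx = re.sub(rf"<({VOID_TAGS})([^>]*?)\s*/?>", r"<\1\2 />", jsx)
    # turn HTML comments into JSX comments
    jsx = re.sub(r"<!--(.*?)-->", r"{/*\1*/}", jsx, flags=re.S)
    return jsx

print(html_to_jsx('<div class="card"><img src="a.png"><!-- logo --></div>'))
# <div className="card"><img src="a.png" />{/* logo */}</div>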

I had written pretty much everything, so I just switched to my work environment, created a simple Rails app and connected it to my backend module. Now I have a platform that can be an alternative to Vercel's V0!

Today, I am announcing FrontBricks publicly, but I have to say that before this post, around 211 people gave me their email addresses to be put on the list of early adopters, and I gave them access to the platform earlier this week!

My birthday (May 30th) was this week, so I guess it can also be a bit of a surprise for my friends and the community.

How can I access FrontBricks?

Well, it is easy. You just need to go to frontbricks.com and create an account (sign up link). Then you just need to confirm your email and boom, you have unlimited access to FrontBricks, completely free of charge!

You can generate a component, then improve it, and every time you feel you need a new component, you can easily create a new code snippet. It is as easy as drinking a cup of tea.

Future Plans

Since this project isn't monetized yet, the very first thing that comes to my mind is a way to monetize it (you can still donate in crypto through this link). A good business model can help this project become much better.

I am also thinking of releasing an open-source model based on the data gathered on FrontBricks, because one of the reasons I coded this project is that I couldn't find a model specialized for front-end generation!

These are my concerns for now. If you have any other ideas, I'm open to hearing them.

Conclusion

I have a haystack of ideas in my mind, and if I find enough time, I implement them. Mann-E and FrontBricks are just two of the projects I've made, and to be honest, Mann-E, with around 6,000 users and more than 50,000 generated images, is one of my most successful projects.

FrontBricks has potential, but I guess I can’t keep it up alone. I’m open to technical and business ideas as well. So if you have any ideas in mind, feel free to send me a message, my email is haghiri75@gmail.com 😁

Nucleus is the proof that “Small is the new Big”

No matter what you've heard, size matters. Especially in the world of AI models, having a smaller and more affordable model is the key to winning the competition. This is why Microsoft invested time, GPUs and money in the Phi project, which is a Small Language Model, or SLM for short.

In this post, I present Nucleus, my newest language model project, which is based on Mistral (again) and has 1.13 billion parameters. And of course, this post will have a s*it ton of references to HBO's Silicon Valley series 😁

Background

If you know me, you know that I have a good background in messing around with generative AI models such as Stable Diffusion, GPT-2, LLaMa and Mistral. I even tried to do something with BLOOM (here) before, but since the 176B model was too expensive to put in the mix, I left it behind.

But later, I started my own AI image generation platform called Mann-E, and in the previous weeks, my team delivered Maral, a 7-billion-parameter language model specializing in the Persian language.

After observing the world of smaller but more specific language models (should we call them SMBMLMs now?) like Phi, and also after the release of TinyLLaMa, I started a journey to find out how I could stay loyal to Mistral models but make them smaller.

You know, since the dawn of time, mankind has tried to make things smaller: smaller cars, smaller homes, smaller computers and now smaller AI models!

Basic Research

In my journey, I just wanted to know if someone had ever bothered to make a smaller version of Mistral, or whether we had to go through the whole coding procedure ourselves.

Lucky for us, I found Mistral 1B Untrained on HuggingFace and even asked the author a few questions about the model. As you can see, they're not really happy with the model, but I saw the potential. So I decided to keep it in my arsenal of small models for research.

Then I searched for datasets, and sparks started flying in my head about how I could make the damn thing happen!

The name and branding (and probably Silicon Valley references)

The name Nucleus comes from HBO's Silicon Valley, which is by far my favorite show of all time. If you remember correctly, Hooli's CEO, Gavin Belson, needed something to piss Richard off, right? So he made Nucleus. But his Nucleus was bad. I tried to make mine at least a little better 😁

Since we know it’s time to pay the piper, let’s waste less time and jump right into the technical details of the project.

Pre-Training

Since the model is claimed to be untrained, we can understand that it doesn't really know the language yet, right? Even now, if you run inference on the model on HuggingFace or locally, you may get a huge sequence of letters with no meaning at all.

So our first task was to pretrain it. Pretraining the model was quite easy: a 3090 and 40 hours of training. It was done on the one and only TinyStories dataset.

Actually, this dataset is great for pre-training and giving base models an idea of the language and its linguistic structures, and it does that pretty well. Although, since it only has around 2 million rows, you have to expect heavy over-fitting, which can easily be fixed through fine-tuning the model.
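
For the curious, the pre-training step looks roughly like the sketch below, using HuggingFace Transformers and Datasets. The checkpoint name is a placeholder for the Mistral 1B Untrained weights, and the hyperparameters are illustrative rather than the exact values we used.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "path-or-repo-of-mistral-1b-untrained"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

stories = load_dataset("roneneldan/TinyStories", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = stories.map(tokenize, batched=True, remove_columns=stories.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("nucleus-pretrain", per_device_train_batch_size=8,
                           num_train_epochs=1, fp16=True, logging_steps=100),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()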

Training on Tiny Textbooks

Well, the whole point of Phi-1 was that textbooks are all you need, and since Microsoft doesn't like to share their dataset with us, we had to do a lot of research on the available options.

The very first option that came to my mind was using GPT-4 to generate textbooks, but the cost could be astronomical considering we are not funded. Spending a few thousand dollars on a dataset? No thanks.

So during this research, we discovered the Tiny Textbooks dataset. Apparently Nam Pham did a great thing: they crawled the web and turned it into textbooks. So kudos to them for letting us use their awesome dataset.

Okay, fine-tuning called for another 40 hours of training, and it went fine. After fine-tuning for two epochs of 420k steps each, we got the best results we could get.

Results

On TinyStories, the model really loved telling stories about Lily, which was no surprise, to me at least. But on Tiny Textbooks, the model did a great job. Okay, this is just the result when I asked for a pizza recipe:

And as you can see, it's basically what HuggingFace offers out of the box. With a little bit of tweaking of the generation settings, you can easily get good results out of this baby!
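
As an example of the kind of settings I mean, something like the following sampling parameters is a reasonable starting point (the repository name is a placeholder, and the values are just what I'd try first, not magic numbers):

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "your-nucleus-checkpoint"  # placeholder for the published model
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "Here is a simple recipe for a homemade pizza:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,  # tames the model's tendency to repeat itself
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))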

But sadly, it still sucks at two things (which are basically what make you click on an LLM-related link in the first place). The first is question-answering (or instruction following), which is not surprising, and the second, which made me personally sad, is coding, since I am a developer and I like a well-made coding assistant.

But in general, it can compete with other well-known models, I guess. It all depends on what we train the model on, right?

But it still needs more effort and training, so we are heading to the next section!

License

If you know me from the past, you know I love permissive licenses. So this model is licensed and published under the MIT license. You can use it commercially without any permission from us.

Further changes and studies

  • The model does well on English. But what about more languages? My essential mission is to try to make it work with the Persian language.
  • It is good at generating textbooks and apparently loves food recipes and history lessons. But it needs more; maybe code textbooks would be a good addition.
  • The model should be trained on pure code (StableCode style) and also code-instruct style data (I haven't seen models like that, or maybe I'm just too lazy to check all the models).
  • The model should be trained on a well-crafted instruction-following dataset. For me personally, it is OpenOrca. What do you suggest?

Links

Donations are appreciated

If you open our github repository, you will find a few crypto wallets. Well, we appreciate donations to the projects, because we’re still not funded and we’re waiting for investors’ responses.

These donations help us keep the project up, make content about it and spread the word about Free/Libre and Open Source Software, or FLOSS!

Conclusion

In a world where people get excited about pretty much every React.js app wrapped around OpenAI's chat API and call it a new thing, or where companies try to reinvent the iPod with the power of ChatGPT (and also make a square-shaped iPod Touch), new models are the key to keeping our business up.

But you know, if models stay huge and you can't run them locally, that calls for more and more proprietary stuff, where you have no control over the data and you may end up handing your company's confidential data to a third party.

Open-source small language models, or open SLMs, are the key to a better world. You can easily run this model on a 2080 (or an even less powerful GPU), and you know what that means? Consumer hardware can have access to good AI stuff.

This is where we are headed in 2024, a new year of awesomeness with open models, regardless of their size.

Maral is here, a 7 billion parameter bilingual model with support for Persian!

If you read my previous post, you know how much I like open-source AI material; I even jokingly titled my BLOOM post "I was too cheap to pay for GitHub's Copilot"! So making an open-source model has always been one of my life goals. Also, in my Persian blog, I pointed out that the dominance of the English language in the current LLM scene is a little bit concerning (read it here).

As of today, I am pleased to announce that Maral is here: a 7 billion parameter bilingual model which can respond to Persian and English prompts and can produce GPT-3.5-level answers based on the dataset we fed it!

Maral 7B alpha 1 and its advantages

Since the release of the GPT-2 and BERT models, there have been efforts in our community to make a Persian text generation model. But to be honest, most of them were abandoned in the middle of the road.

In last year's AI revolution, however, people saw potential in the realm of generative AI and started working on models, from RAG on top of existing models to fine-tuning base models which could somehow understand the Perso-Arabic alphabet.

But with the release of the Mistral model, everything changed. I personally never thought a 7 billion parameter model could understand multiple languages this well. I've put more information in the next section of the article on why Mistral became my number one choice as the base model!

However, the biggest problem was still there, and it was the dataset. Finding a good enough dataset is always a bottleneck. But we've been lucky enough that one of the Iranian developers has translated the Alpaca dataset to our beloved Persian language (and it's accessible here).

When you're in possession of the needed ingredients for your potion, I guess it's time to light up the cauldron and start brewing!

Why Mistral?

As a developer and an enthusiast, I always try new models and tools, especially when it comes to text. Mistral was the new kid on the block, and I personally saw a lot of positive reviews about it. So I tried these:

  • Loading the model and testing it on the normal English tasks it was supposed to be good at.
  • Testing the model on more complicated tasks such as reasoning or basic math.
  • Testing the model on code generation.

All of the above tests went very well. You'd probably never expect a mid-sized model to perform well on all of these tasks, but this one was a little different. Although it was a little bit confused on reasoning tasks, I could let that pass (since even GPT-4 has problems with reasoning).

But I always run another set of tests on these models, because I'm Iranian and I speak Persian/Farsi, and I really like to know how a model performs on my language. So this is what I tested:

  • Generic Persian text generation: the model started generating nonsense, but it showed me the potential; my guess was that it may have seen some Persian text before.
  • Asking Persian questions: it tried its best to put words together, but at some point it went back to nonsense or even answered completely in English!
  • Translation! Believe it or not, it can be a very good measure of a model's multilinguality (okay, I made that term up, stay calm). Although the model was successful at English to French and Spanish (judging with my very limited knowledge), it didn't perform well on Persian.

Okay, the tests showed me the potential. So I had to team up with my colleague and make it happen! Let's add support for our mother tongue to this model!

Train procedure and infrastructure

Now let's talk about the fun stuff. First, we saw that we would need a very big and pretty much unaffordable (at least for us) infrastructure to train Mistral from scratch.

So we did a lot of research on the topic and found these methods:

  • Retrieval-Augmented Generation (RAG)
  • Quantized Low-Rank Adaptation (QLoRA) and Parameter-Efficient Fine-Tuning (PEFT)

To be honest, RAG is cool, but it won't lead to a new model. So we tried QLoRA and PEFT.

The basic training (with extremely inaccurate results) was done on a T4 (Colab's free tier), and then we decided to go further. So I went to our friends at Jupyto, a company based in Iran where you can rent GPUs by the hour.

They had great offers on powerful GPUs, and we got our hands on a 3090 Ti with 64 GB of RAM. It was a perfect machine for the training, and we trained the better model on this setup.

The QLoRA training took over 10 hours for 5 epochs (each epoch took more than 100 minutes), and the results were out of this world! It could give us text that is semantically and grammatically correct!

Then we merged the adapter into the base model to take advantage of the base model's knowledge as well.
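
For readers who want to reproduce this, the recipe we followed boils down to something like the sketch below (bitsandbytes + PEFT). The hyperparameters, target modules and dataset handling are condensed and approximate, not our exact training script.

import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = "mistralai/Mistral-7B-v0.1"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)

# ... train with a Trainer/SFTTrainer on the translated Alpaca dataset for 5 epochs ...

# After training, merge the LoRA adapter back into the base weights:
merged = model.merge_and_unload()
merged.save_pretrained("maral-7b-alpha-1")
tokenizer.save_pretrained("maral-7b-alpha-1")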

However, I personally faced a set of problems, which I will point out in the next section.

The problems you may face using Maral

Since we're in our alpha stage, I have to admit you may face these problems while using Maral, especially with the Persian language.

  • The prompt format is based on the Guanaco format, so it doesn't have dedicated start and end of sentence tokens.
  • The tokenizer is not optimized for Persian letters yet, so it may be slow on Persian text.
  • The model is really good at hallucinating.
  • Following from the previous item, it also easily produces misinformation, so please be careful with the answers you get from the model.
  • The model likes to repeat itself a lot, so if you get a repetitive answer, do not worry.
  • The model, being so large, is a little hard to deploy on consumer hardware. However, on the HuggingFace page, we've provided 8-bit loading instructions as well (see the sketch after this list).
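
As a rough sketch of those 8-bit loading instructions (the repository name and prompt here are assumptions; the HuggingFace page has the authoritative version), loading the model on a single consumer GPU looks something like this:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "MaralGPT/Maral-7B-alpha-1"  # assumed repo name, check the HuggingFace page
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Guanaco-style prompt format, as mentioned above
prompt = "### Human: What is the capital of Iran?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))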

Further works

  • Optimizing tokenizer for Perso-Arabic alphabet.
  • Providing a better dataset.
  • Adding bos_token and eos_token to the tokenizer, especially for the instruction-following/chat model.
  • Providing GPTQ, GGUF or GGML models to make it more affordable on consumer hardware.
  • Making much smaller models (say 1B or 2B) with a more focused niche.

Related links

Re-creating Midjourney with only $10 – Technical Report for Mann-E 5 development

The year 2022 was an amazing year for the generative AI market, and no one can deny that the release of some cool models such as Midjourney, Stable Diffusion and ChatGPT made this market bigger, better and more competitive. You may also know Mann-E, the model I developed on top of Runway ML's Stable Diffusion 1.5 using DreamBooth. In this article, I provide a report on the development procedure of Mann-E 5, which will be accessible on April 14th, 2023 on the Mann-E platform.

Introduction

The Intention

The main intention of Mann-E in the first place was a personal exploration of AI art and text-to-image models, but later I found business/commercial opportunities, and since I am also an open-source enthusiast, the main intention changed to providing an easy and accessible open-source alternative to Midjourney.

Since Midjourney is only accessible through Discord, it's expensive (compared to most other image generation models), and there is also a huge problem for Iranian users trying to pay for the basic or standard plans, the idea of a platform for art generation was born.

The method

For this particular version, I used the self-instruct method that was used for Stanford's Alpaca dataset and model. The tools used for this project were as follows:

  • ChatGPT
  • Midjourney
  • Dream Booth

The Procedure

Using Midjourney

The idea of using Midjourney-generated images in the fine-tuning process came to me from PromptHero's Openjourney project. They used DreamBooth and data from Midjourney version 4.0 at first; then they did the training on more than 100K images on their own infrastructure.

So, Midjourney became a good source of data, because you probably won’t face any intellectual property or copyright issues in the process of using images created by their algorithm (the full explanation is available in my previous post).

ChatGPT as a prompt engineer

I've seen people create great prompts for Midjourney using ChatGPT. As large language models, both ChatGPT and GPT-3 (and GPT-4) can be great choices for creating prompts. So I chose ChatGPT, since it had a free interface and also more affordable APIs.

P.S.: There are also other models we can use to generate prompts, but they may need extra setup. They'll be explored in future research.

Dream Booth

The most affordable way of creating your own text-to-image model is DreamBooth. It can be run on a free Colab notebook, and there are also tons of tools available for doing the job.

For the development of Mann-E 5, I used the code from this repository. Although some modifications were needed, the code as a whole is great.

Development of Mann-E

Getting needed tools for development

First, I had to open an OpenAI account, which I had done before. I needed to use ChatGPT in order to get prompts.

Then I got a basic Midjourney subscription, which gives you ~4 hours of fast generation so you can generate images without any difficulties. Of course, I did a few tests (around a hundred images) before getting to work.

The last thing was writing a piece of Python code to crop my images to 512×512 (Midjourney's upscaled outputs are 1024×1024) and also save the respective prompts in text files.

Then I used the fast DreamBooth code in order to fine-tune Stable Diffusion version 2.1 on the dataset I had just gathered.

Using ChatGPT as a prompt engineer

Currently everyone is using ChatGPT to their advantage, since it was the most revolutionary LLM of 2022 and probably 2023. For the development of Mann-E 5, I used GPT-3.5 Turbo (which is the free one), and it generated pretty good prompts.

This is the example of message for prompt formula:

Here is a Midjourney prompt formula:

raw photo, [the subject], [mood], [time of the day], [art style], [camera], [lens]

The “mood” can be realistic, cyberpunk, steampunk or futurist.

Do you understand?

As you can see, I just showed it the way I write prompts. Then it responded:

Yes, I understand the Midjourney prompt formula you have provided. It includes a raw photo as the starting point, followed by information about the subject, mood, time of day, art style, camera, and lens. The mood can be one of four options: realistic, cyberpunk, steampunk, or futurist.

The response shows it was ready to be asked for new prompts. Then I asked it for 5 to 10 prompts per idea, which (with Midjourney returning a grid of four images per prompt) meant 20 to 40 images per idea, so I was set for Midjourney image generation.

Here is how I asked it about prompts:

Give me five prompts for “ruins of a roman temple”

and here is how it gave me the set of prompts (it’s trimmed for this article):

A striking black and white image of the ruins of a Roman temple, with dramatic shadows and highlights emphasizing the structure’s grandeur and decay, shot at night with a modern digital camera and a wide-angle lens.

If you spend time on Midjourney prompting, you will notice it’s a pretty good prompt, even if it doesn’t follow the formula very well.

Generating images using midjourney

This was the easy part. The whole process was feeding the ChatGPT-generated prompts to Midjourney, then upscaling and downloading the images.

The result was 464 images with different prompts which included different moods, styles and genres.

Pre-processing the dataset

Since Stable Diffusion only accepts 512×512 or 768×768 images as input data, I had to write a simple Python script to do the resizing using OpenCV.

There was also an Excel file containing the image file names and the prompts used for each image. I had to add a function to turn each prompt into a text file with the same name as its image file.
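
That pre-processing step is small enough to show in full spirit here. This is a sketch rather than the original script, and the spreadsheet column names ("filename" and "prompt") are assumptions:

import cv2
import pandas as pd
from pathlib import Path

src, dst = Path("midjourney_raw"), Path("dataset")
dst.mkdir(exist_ok=True)
sheet = pd.read_excel("prompts.xlsx")  # assumed columns: "filename", "prompt"

for _, row in sheet.iterrows():
    # resize each 1024x1024 upscale down to the 512x512 Stable Diffusion expects
    image = cv2.imread(str(src / row["filename"]))
    image = cv2.resize(image, (512, 512), interpolation=cv2.INTER_AREA)
    cv2.imwrite(str(dst / row["filename"]), image)
    # write the prompt into a .txt file named after the image
    (dst / row["filename"]).with_suffix(".txt").write_text(str(row["prompt"]), encoding="utf-8")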

Training Stable Diffusion using Dream Booth

Unlike Mann-E 4, Mann-E 5 is based on Stable Diffusion version 2.1 (the 512px version). The training was done in two different stages.

In the first stage, there were 5440 steps of DreamBooth training (calculated by the (number of images × 10) + 800 formula, with 464 images) and 928 steps on the text encoder so it would understand the trigger words.

In the second stage, the resulting checkpoints and weights from the first stage were tuned for 10880 steps (twice the first stage) plus 928 text-encoder steps to get the resulting images closer to the dataset.

It took a total of 4 hours of training on a shared T4 GPU on Google Colab. Of course, upgrading the Colab plan to Pro or Pro+ can be beneficial in order to get better GPUs and better training times.

The Results



Further Study and Research

The new model still has problems with photo-realistic images, but it does a great job on illustration and concept art. So for now, it can be considered an artistic model. In the future, the photo-realistic side must be fixed as well.

The next thing is trying to tune the base model (whether Stable Diffusion version 2.1 or Mann-E checkpoints) on a larger dataset with more diverse images in order to get it closer to Midjourney.

Conclusion

Using pre-trained, readily available AI models such as ChatGPT not only elevates people's lives, but also helps AI engineers and developers get more worry-free data for their projects and products.

Also, using Midjourney as a tool for creating royalty-free images is a wise choice, especially when you are trying to create a brand new text-to-image AI model.

In conclusion, I can say I got much better results this time, because I utilized both ChatGPT and Midjourney for my needs. The checkpoints for Mann-E 5 will be available on HuggingFace on Friday, April 14th, 2023, at the same time as the public release of the Mann-E platform.

You don’t owe money to the brush company if you sell your art

In my previous post, I explained how the future of content is AI. Also, in an older post, I talked about how AI-generated content can revolutionize the world of interior design/architecture. In this post, however, I'm not talking about those topics; I'm going to talk about legal issues and questions around AI-generated art, and there will be a twist at the end. Wait for it 😁

AI content creators are concerned about legal stuff

Yes, they are. If they are not, they are making a very, very big mistake. When you create any form of content, one of the most important aspects of publishing it is the legal side.

This legal stuff is usually about the rights of content creators over their content and also the rights of the companies who develop the tools for content creation.

In this part of the article, I'll talk about what I think are the important legal topics in this new generation of content creation.

The Ownership

The very first time I posted about my own AI art generator model, Voyage, in a Telegram chat room, one of my friends asked "Who owns the generated art? You? Or us?" and I explained that since you have to run the generator on your own computer, you are the owner of the generated art and you don't owe me anything.

By the way, most of them gave me huge credit when they posted their artwork on social media or even in that very same chat room.

But I found out that most of the proprietary art generators like Midjourney don't act like that. They make you pay them if you want to own what is already yours. Let me make this a little clearer.

Imagine you buy a nice set of brushes and paints. You paid for them, right? Now you've made a beautiful piece of art with those tools and you want to sell it. Now imagine the brush company asks for a share! Isn't that hilarious? Of course it is. I believe this must be considered by AI artists who use these proprietary tools to generate content.

Use by and for minors

Another important topic with every new generation of content creation tools is always "how will minors use it?", and it concerns me a lot as well (especially since Stable Diffusion 2.0 has no NSFW filtering). So what should we do for our younger friends? A lot of content creation platforms like YouTube, Pinterest, Instagram, DeviantArt, etc. have their own policies and filters for public content distribution.

For example, I'm a big fan of horror movies, and when I search for content about them such as reviews, fan art and even scripts, I usually face age confirmation pages and modals. Now you can see where I'm going with this topic.

AI is dumb; it cannot understand what it generates, and we need a little more human oversight of the generated content. For example, on Stable Diffusion's Discord, I remember that reacting to NSFW content with a certain emoji could mark it as potentially harmful, and then they could improve their NSFW filtering system.

Plagiarism

I guess you thought I don't give a fine F about copyright, right? No, that's not true. I believe artists and content creators should be credited properly. So let's talk about another topic which seems very important.

The very first day I started AI content generation, there was only one good free tool (in any sense of the word "free") and it was VQGAN+CLIP. It was a great tool for making art, and even today it has a unique artistic quality compared to other tools.

But even in those days, I had a huge concern: what if I plagiarize another artist's work? This concern was at its peak when I figured out that adding the names of well-known artists such as Greg Rutkowski, James Gurney, Thomas Kinkade, Salvador Dali and thousands more can alter the result for us! So as both AI generator developers and artists, we should pay attention to this matter as well!

And last but not the least: Fake Art!

One of my favorite activities is trying new artist names in my prompts. I love to see how their minds would paint what I'm thinking of. But there is a problem: what if I claim the result is an unreleased painting by a well-known artist? This can lead to huge financial fraud.

I could never stop thinking about these matters, and as a person who has developed a model and generated tons of content with AI, I never want to be classified as a fraud or a scammer, or even as a person who disrupts the work of other artists.

I guess we talked enough about legal issues, let’s get to the big plot twist of this blog!

Big Twist!

The young blonde woman in the picture is beautiful, isn't she? I made her using my model Voyage, which I introduced earlier in this blog post. Want to use Voyage and create your own art? Fine. You won't owe me anything if you do. And if you want to use it in Google Colab, here is the link to the notebook!

Voyage is trained on data crawled from OpenArt and, as you can see, it is a model with a very artistic feel compared to other available models.

Conclusion

In this blog post, we discussed one of the important aspects of AI content creation/generation: the legal stuff. We also have to fight for our ownership rights as content creators. In my personal opinion, it is okay to ask for money for a service; we pay a lot for infrastructure and computing power as developers or companies, but if we make our users pay us a share of their work, I guess that's not fair.

On the other hand, we need more and more open-source tools for AI content creation. Big tech companies are ruling this market as well, and that is never good.

I hope this article was useful and if you like more content like this, please consider sharing it with your friends 🙂

The future of content is AI

I personally never counted myself as a content creator, but apparently I have always been counted as one. Why, you may ask? The answer is easy: I have a habit of filming my work, writing blog posts (mostly in Persian), posting my work and code on Twitter and so on. All of these are the behaviors of a content creator.

My content, on the other hand, was mostly about me. I never cared about making those types of advertising reports (where you have to care a lot about SEO, backlinks and such), because creating content wasn't my job. Now I am thinking about it, but in my own way.

The history of content creation

Before going deep into this, let's clear something up. This part of the article is from my own point of view and it's not a definitive history, but at least this is how I saw content creation and how it works.

One-way content generation

Let's go back a lot. I mean A LOT! Maybe to 2006: you opened a URL in your Internet Explorer and found a very ugly static website written in pure HTML. Some of those websites also had some annoying JS functions (we should be grateful for the modern use of JS; there are no mouse-pointer-following figures or background rain anymore!).

This is an example of a one-way form of content: content you cannot react to as-is. You had to find an email address on the Contact Us page, or fill in their forms, and usually they never even checked the respective inbox. So you couldn't help them improve their content or right their wrongs.

Here comes the blog

I was almost 12 when I discovered the concept of blogs, and I started writing on a free blogging service (which is very popular in the Iranian community, and you can find it here), and it was amazing.

The whole greatness of blogging was that it wasn't "one-way": people could interact with each other using comments, and at the same time chatrooms were also pretty popular. So we usually had a good time with our internet pals in those days. And you know what that means?

User generated content (UGC) matters!

It really does. Imagine you want to get a new hair dryer. What do you do? I guess you go to Amazon and search for hair dryers. A hair dryer is not something you buy every week, so you need to know whether the hair dryer in question lasts long enough, how much power it draws and whether it meets the health guidelines and regulations for a product like that.

You read the description, specifications and other details provided by the seller on Amazon. That's good, but not great. You have an idea about the product, but you don't know what its user experience is like. What can we do about this? Easy: we scroll down to the user reviews, where people have rated the product and described their feelings about it.

In the reviews section, you find out this product doesn't last that long; you may even search other platforms for the very same product and find out what's wrong with it. For me, the second platform is always YouTube. People do a lot of good product reviews on YouTube (even those who are sponsored by the brand we're looking at are usually helpful as well!) and guess what? YouTube is also a platform for UGC!

But wait, it doesn't end here. You've read this far, but you're probably still confused about the title. I have to say this is where the actual fun begins!

The future of content is AI!

Now, this is the part you were waiting for. In this section, I'm going to talk about how AI can help us create better content, because recently I've been following the AI art trend a lot! I've also coded and developed some AI art tools myself! I was also too cheap to get a paid Copilot membership, so I created my own version. See? I've officially joined the army of content creators, but in my very own way.

Sentiment Analysis

I guess this one is not really about content creation but more about content moderation. But moderation is as important as creation (if not more), so I had to put it here. Having a sentiment analysis system on our user-generated content can help us find out whether a product has poor quality, how toxic our community is, and things like that.

To be honest, it helps us more than it seems. It helps us build a better community (pretty much by banning suspicious users) and also give feedback to suppliers who sent us poor-quality products. It doesn't end there, by the way; my example is still about a retail store and not a general website.

In the modern day, you have to watch your tongue more than before. A lot of people have stood up for their rights, and the typical words of your daily speech can be offensive to other people. So in this particular case, I believe these analytical tools can help us improve even in our personal lives by building a better community.
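
To give a feel for how little code such a check needs, here is a tiny sketch using an off-the-shelf HuggingFace sentiment pipeline. The threshold and the follow-up actions are placeholders; a real moderation system obviously needs much more than this.

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

reviews = [
    "This hair dryer died after two weeks, total waste of money.",
    "Works great, dries my hair in five minutes!",
]

for review in reviews:
    result = sentiment(review)[0]
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print("Flag for follow-up with the supplier:", review)
    else:
        print("Looks fine:", review)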

We've talked enough about content moderation using AI; let's get to the fun and interesting topic of content generation!

The rise of AI art generators

AI art is basically an empire now. AI art generators such as DALL-E 2 and Midjourney (you might like to take a look at my open-source take on Midjourney, OpenJourney, just saying) are very popular, and on the other hand, Stable Diffusion (and its forks) is really growing on the open-source side as well.

You cannot deny that these are pretty cool tools for content creation. They can help us bring our ideas to life in the form of art, 3D design, interior design, UI/UX and a lot more. So we have to talk about them; we have to recognize these images as the new content people create and enjoy!

It does not end here, either. There is also a new trend of text-to-music, which means a lot of music creators (me included!) may use AI to create music as well. This is the beauty of AI content creation.

And finally, everyone offers AI these days.

Yes, every company with even a small relation to content creation offers AI! We expect the big names of our industry, such as Google or Meta, to provide tons of AI tools such as libraries, frameworks, models, datasets and even programming languages. But do you know what amazed me recently?

Notion also provides AI solutions for productivity and ideas! You can basically have some sort of copilot for your content calendar or, even better (or for some people, worse), an AI companion for task management, and I think this is great.

Now that we have tools to create text, images, videos and sounds, what should our next step be? I guess we have to read minds (and I'll write an article about that as soon as possible).

Conclusion

Now let's conclude (I know, I have this section in every blog post and I never put anything useful here). We just saw where the age of digital content creation started. The internet played a great role in revolutionizing this age and opened new doors of opportunity for those of us who usually couldn't easily get the chance to write in a magazine or newspaper. These days we write on Twitter (at least as long as we can write without paying Elon Musk for it!) and it requires no privilege, only an internet connection.

So AI can help us improve our content. It can help us write better reviews, and it can help us turn a bunch of photographs into a full report: you just input your photos, the image-to-text pipeline extracts the details of each photo, then you edit them, and you have your report.

In my opinion, AI is here to help us make the world a better place, because it gives us an equal chance of being an author, artist, musician or anything else that required some level of privilege in the past.

Severus does the magic

It hasn't been long since I told you that I was too cheap to pay $10 a month for GitHub Copilot and came up with the idea for Severus, my own AI pair programmer. It was something that went boom. My blog usually doesn't get more than 20 or 30 viewers a day (at its best), and for almost a week, I had more than 200 views per day. Since people showed interest in yet another AI pair programmer, I have decided to continue working on Severus more seriously.

Severus code generation
Severus is now capable of being accessed as an API

My plans for Severus

So in this article, I'll discuss a bunch of problems I may face on the long path of creating Severus and making it available as end-user software. There are some serious concerns. For example, when I talked about the idea of Severus with one of my colleagues, he told me he was concerned about the confidential code he has written.

Almost all of your concerns are valid (except for the person who thinks this whole process is handled by the Illuminati), and those are my concerns as well. The next problem I may face is scaling, so I may need to hire a well-educated DevOps engineer.

In this section, I explain all of my serious concerns and needs, and I expect some help from you, the kind readers of the article.

The Community

Creating a community around something which is honestly a weekend project doesn't seem like a good idea. You may say the same thing happened with the Linux kernel. You're right, but this is a little bit different: there are tons of tools which may work much better than Severus.

Also, it is important to pick the right place for the community. A subreddit? A Discord server? A room on Matrix? An internet forum? Honestly, I have no idea.

So this is the biggest concern for me. The community!

Performance and text-generation glitches

The performance is good, thanks to the HuggingFace Inference API. Actually, knowing that the HuggingFace API exists helped me with the implementation. But I still have some concerns here.

My main concern is that BLOOM sometimes starts generating text which is not, or cannot be classified as, code. I tried different ways to get better results, but I still need a way to verify that the generated result is code and not prose that merely includes code. And this is really the hard part, I guess.
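
One cheap idea I'm playing with is a heuristic that scores how "code-like" the generated text is before showing it to the user. The sketch below is just that idea, not the validator Severus actually ships with:

import re

CODE_HINTS = (
    r"^\s*(def|class|import|for|while|if|return)\b",  # common keywords at line start
    r"[{};]\s*$",                                      # braces or semicolons at line end
    r"^\s*(#|//)",                                     # comment markers
)

def looks_like_code(text: str, threshold: float = 0.5) -> bool:
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return False
    hits = sum(1 for line in lines if any(re.search(p, line) for p in CODE_HINTS))
    return hits / len(lines) >= threshold

print(looks_like_code("def add(a, b):\n    return a + b"))        # True
print(looks_like_code("In this tutorial we will learn Python."))  # False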

For this purpose, I may need some help. The results must be validated in order to get a good AI pair programmer; otherwise, it'll become more like an annoying colleague, or an intern who knows something but can't collect their thoughts.

The Product

And the final concern/plan is the product. Currently, I only have a simple application which runs on port 5000 on my laptop. Nothing more. There is no authentication, no user validation system, no monitoring, no scaling, no infrastructure. Basically, a MacBook Pro which runs tons of programs daily, and Severus is currently one of them.

I had a VS Code extension in mind; I also thought of a web app as the MVP, where you can easily copy your code and then use it in your own projects (though of course it won't be the best choice for confidential code).

Although I have ideas in mind, I still need more brainstorming about how this project should be delivered to you as a product.

Conclusion

I still have a lot to do on this project. There might be some language detection to check whether the generated output is code or not, and also some more code validation to avoid mixing different programming languages.

Overall, this is one of the most difficult and, at the same time, most fun projects I've ever done. I won't give up on it, even if it seems like a painful and expensive hobby to the people around me 🙂

 

I was too cheap to pay $10 a month for copilot, so I made my own

In mid-2021, there was a revolution in coding. As a lazy programmer who always needed a fast and smart assistant, I was really happy to have GitHub Copilot in my arsenal of coding tools. So I was one of the early adopters of the whole idea of an AI pair programmer.

Everything was fine with Copilot. I wrote tens of thousands of lines of code over the last year, and I could build a lot of projects which would have been impossible without a good, smart and fast pair programmer. But everything changed last week, when I got an email from GitHub telling me I can't have free access to Copilot anymore.

It was a sad moment in my life, but I had different ways of adapting to and accepting the reality. First, I thought of paying $10 a month for a GitHub premium account, but since I wouldn't use most of GitHub's premium options, it wasn't a suitable solution for me. I also checked out Tabnine and Kite, and those didn't work out for me either.

My own copilot!

Say hello to Severus, my new AI pair programmer!

First, let me talk about the name a little bit. I was watching the Harry Potter franchise recently, and my favorite character in the whole franchise is none other than Severus Snape. So I named my AI pair programmer after him. But I know you might be curious about how I made it. So let's find out!

The language model

First, I needed a language model capable of generating code. At first, I had OpenAI's GPT-3 in mind, but I remembered that for various reasons, I can't use it. Then I went for free language models. I tried GPT-J, and although it could understand code, it didn't seem like a very high-accuracy model to me.

Then I realized that Meta had released the OPT-175B model. I put some of its capabilities to the test. It is a really good language model, but it works best when you use it as the core of a chatbot or a blog-post generator (or maybe a prompt engineering tool for text-to-image models); it's not a great code generator.

Then I found my saving angel, built by a lot of open-source engineers and enthusiasts around the world: it's none other than BigScience's BLOOM.

Code tests and inference

Like most of you may have done, first I tried to complete a love story with the model. It was cool. Then I tried to create a friendly, a helpful, an idiotic and an evil chatbot with the model. All worked out perfectly. Back then, I did not have any limitations on Copilot, so I didn't care about code generation.

When I found myself in the misery of not having my beloved AI pair programmer, I tried some basic Python code generation with BLOOM. It was fine; then I tested PHP, Ruby and JavaScript as well. I found that it works pretty well, so I decided to write some simple inference code on top of the API.
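
That inference code is genuinely simple; roughly speaking, it's a single request to the hosted text-generation endpoint, something like the sketch below (the token is yours, and the parameters are just sensible defaults):

import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
HEADERS = {"Authorization": "Bearer hf_your_token_here"}

def complete(prompt: str, max_new_tokens: int = 64) -> str:
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.2},
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()[0]["generated_text"]

print(complete("# Python function that reverses a string\ndef reverse_string(s):"))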

Code generation may go wrong

Since I didn't fine-tune the model (and I don't have the resources to), it may glitch sometimes. For example, when you don't really pay attention to your code formatting, it might generate an explanation of the code.

For me, what happened was that it started explaining the code in tutorial format (and I bet the Python code came from the Towards Data Science website, since it had a pretty similar writing style).

In general, I may need a solution for this, as well.

Will it be open source?

Yes. At least it'll be partly open-sourced in the near future. But more than being open source, it will be free (as in non-paid), and I guess that may be a pro for the tool. I haven't paid a single penny for the model, so why should I make you pay for it? By the way, I will be open to donations and technical help from the community.

Future Plans

  • The API
  • VSCode extension
  • A community website (or discord server)

Conclusion

In the end, it seems we have a lot we can do with these brand new language models. I found my way to create a free, reliable and smart AI pair programmer, and of course I need some help along the way.

I warmly thank you for the time you've spent reading my article, and I openly welcome your comments and ideas.

Revolutionizing Interior Design With Artificial Intelligence

Suppose you have an interior design/interior architecture project (or even a company) and you want to go a bit (or a lot) further in your industry. What do you think of first? If you ask me, I'd personally tell you that the answer is augmented reality, and since I'm a co-founder of an AR company (link), that makes the most sense to me.

But let's be a little more ambitious about pioneering the interior design industry. We all know that these days AI art is becoming the new wave of art, and you're probably trying to get access to the DALL-E 2 or Midjourney beta programs. They're cool, but they are not enough. Let's talk about the model I've been working on.

interior design with ai

The idea

The idea of developing an AI which can paint isn't new to me. I'm a big art enthusiast; I play guitar and compose music. But I never spent time learning how to paint like the painters I like (e.g. Salvador Dali). So last winter, I decided to put all my computer knowledge into developing models (or, to be more accurate, software) that can paint for me.

First, I went after VQGAN and developed on top of that. It was cool and artistic, but for my taste, it was too "machine-y". You may think that's the point, but for me, it wasn't.

Later, I found more and better models and developed much better software. Today, I was very surprised by the results. I was working on something else, but I also created a few prompts first, just for fun, and then I got the image posted above! I just asked for an abstract painting as the wallpaper of a living room, and it created this realistic-looking living room for me!

More interesting designs

Well, first I wanted to work on a modern, minimal living room, so I prompted for it, and these two images are my results:

It's great, isn't it? It was exactly what I'd been looking for. I could never get better results for interior design.

Now, let's talk about this! Why does this matter? Why will it revolutionize the industry? Why do you need it alongside AR/VR solutions? Let's discuss!

The importance of AI generated Interior Design

Fine, this part isn't as interesting as the other parts of this blog post, probably because I'm going to talk about things which are not an average Midjourney tester's concerns.

That's okay; you can still go to Midjourney's channel and see what it would look like if Shrek were the next US president, or something like that. Here we'll discuss the importance of AI in interior design.

  • Aesthetics: This is very important, at least in my opinion. There are two minds involved: the customer's and the architect's (or designer's). For example, I myself am a big fan of Salvador Dali (I'm not joking, I really love Dali) and I'd look for an interior designer who has a taste for surrealism. But there's no guarantee that we could agree on a design any time soon. AI can help us find our desired design much faster and more easily.
  • Reducing costs: Sometimes you may pay several designers/architects to get your desired designs. It can get costly, especially if you want help from famous architects. So I guess you'd prefer to have an overview of your desired design and hand it to the architect.
  • Diversity of choice: These days we have diversity in pretty much everything we want, so why not our interior designs? You can get as many designs as you need and then choose one. It's a win-win game!
  • Getting unimaginable designs: Okay, now it's time for the Midjourney fans to come back. Have you ever seen AI-generated art? Most of it is other-worldly good! And it obviously can be used in different aspects of interior design.

More designs

The designs above are from the prompt "interior of a modern office, with pop art as wallpaper pattern, blue color scheme for the furniture", and as you can see, it covered almost everything I asked for!

I know there are some minor problems with the pop art in these pictures, but we should remember that a machine designed this. It means that with a little more training, it can get much, much better at generating images which can inspire interior designers.

Conclusion

Before going any further, I should say this is my deeply personal opinion, and I probably will make money by promoting my AI, so if you have any other opinions, they are 100% welcome in the comments section.

In conclusion, I have to say it's 2022. We have the greatest AI engines, which can generate text, images and even music, and currently most of them are becoming toys for curious teenagers, which is not entirely a good thing.

We can use this potential in different segments of different industries and make our world a much better place. The world won't become like The Matrix franchise (and of course, I love that franchise), especially when we learn how to use machines to improve our work.

So the final conclusion is that we obviously need an intelligent solution for interior design, since it can reduce our costs, make our processes faster and diversify our choices.

Finally, I'd like to invite you to read my Persian blog as well (if you speak or understand the language), because I write more frequently there.

Analyzing components of an electric circuit with YOLOv5

In recent weeks, I've done a lot with YOLOv5. A few weeks prior to this article, I wrote an article on why I love YOLOv5, and later I did a project with YOLOv5 which was an attempt at making something like Symbolab or similar software.

I explained that project in detail in my Persian blog (link), and I may write an English article on it soon. But in this article specifically, I am going to explain a newly finished project of mine!

Electric Circuit component analysis using YOLOv5

Introduction

After making the math equation OCR, I got a few ideas in my head about doing similar projects but in different scopes and areas of interest. Believe it or not, I am not really the type of person who sticks to only one thing, and I've tried many different things in my life. As my job is making computer software and platforms, I have decided to use the knowledge I have in this field to improve my performance in other fields as well.

I studied Computer Hardware Engineering at university, and I know a thing or two about electronics. I have never been an electrician or an electronics expert, but I have made some cool gear using Arduino, Raspberry Pi and even basic electronic components. I am also a big fan of the YouTuber ElectroBOOM and really like what he does!

So this is the reason I started this project. I decided to make a computer vision program which helps us identify the components in a schematic, and in this article, I will explain how I did it.

Who’s the audience of this article?

Since I am not the type of content creator or writer who bombards the audience with complex math and physics (or computer science) concepts, I have to say: everyone.

But to be more specific, everyone who's enthusiastic about artificial intelligence, computer vision and electronics and is able to read English is my audience, at least for this particular article. Also, if you are a newbie who wants to find their own path in the vast universe of computer science, this article will give you an idea of what computer vision projects combined with deep learning look like.

Nikola Tesla

Previously done works

Although I didn't want this article to be a thesis/research paper, I had to include this section. Honestly, I haven't searched for what people may have done with YOLOv5 (or other tools) to analyze electric or electronic circuitry.

I'm sure there are other minds out there who have had this thought, and I appreciate their ideas and their efforts.

The research procedure

The problem

We have tons of circuit schematics in books or notes which students or enthusiasts can't understand very well. Unlike math or physics formulas, there is no application or tool to find out which symbol represents which component; therefore, we need some tool to understand our circuits better.

The possible ways of implementation

  1.  Using OpenCV functions such as contouring and similar stuff to detect which shape is which.
  2. Using a pre-trained model for electrical components.
  3. Developing a CNN or similar network to detect the components.
  4. Fine-tuning YOLOv5 to our need.

Each of these approaches had its own problems. In the following lines, we'll find out why most of them didn't work for me.

Using OpenCV functions, although it's the first go-to for most computer vision programmers, is really problematic, especially when you have shapes which look very similar to each other. This is an example of my input data:

Example of input data

And as you can see, I have a battery in series with a capacitor, and even to human eyes, these two can be mistaken for each other! And remember, OpenCV doesn't do magic; it is only a great tool for processing images.

The next option was to find a pre-trained deep learning model which already knows the components. It is a nice idea, but it also has its own problems. For example, I had no idea which network was used, which libraries were used, etc. Also, there is no MediaPipe for electric circuitry where you can be sure about its functionality in your projects.

The third way was my second favorite by far: developing our own CNN or a similar network for object detection or localization. It is cool and it can be efficient, but the amount of work I would have had to put into it was beyond my tolerance. Especially since I'm not doing these projects for graduation or money, I did not want to put too much effort into this project.

And last but not least, fine-tuning YOLOv5 to my needs was the best solution I could think of. YOLOv5 is one of the best tools for quickly implementing your computer vision plus deep learning ideas. It is also a very accurate and fast tool. So I went with this one.

Data gathering and preparation

YOLOv5 requires a set of labeled images. It means we need to have images of our topic of interest and nothing more.

Nicholas Renotte explains how to get data or images you need in this video. So if you want to do a similar project, I suggest giving that video a watch. But in my case, things were a little different.

I needed tons of schematics, but on the other hand, I didn't really want to spend a very long time labeling and preparing the data. So I decided to draw a couple of schematics on a piece of A4 paper, like this:

Example of my data

And for preparation, I just took photos of these drawings using my phone (a Xiaomi Redmi Note 8 Pro) and then moved them to my computer.

For slicing them into small chunks of photos, I just used Adobe Photoshop (I know that might be surprising, but I am too lazy to use any other tool) and then saved them into a folder structure acceptable to YOLOv5.

The next part (which I always call the worst part of an AI/data project) was cleaning up the data and then labeling it. I used labelImg to label my images, since it provides a YOLO-style labeling format.
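
For anyone following along, the folder structure and dataset config YOLOv5 expects look roughly like this; the class names below are an assumption based on the components mentioned in this post, so adjust them to your own labels:

# dataset/
#   images/train/  images/val/   <- the cropped photos
#   labels/train/  labels/val/   <- one labelImg-generated .txt per image
from pathlib import Path

data_yaml = """
train: dataset/images/train
val: dataset/images/val
nc: 4
names: [resistor, capacitor, inductor, battery]
"""
Path("circuit.yaml").write_text(data_yaml.strip() + "\n")

# Training is then a single command from the YOLOv5 repository, for example:
#   python train.py --img 416 --batch 32 --epochs 500 --data circuit.yaml --weights yolov5m.pt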

Training YOLOv5

After doing all the hard stuff, the time to train our beloved YOLOv5 arrived. Training YOLOv5 is fairly easy! You just have to follow the guide provided in their GitHub repository to train your own version of YOLOv5.

Since the process of training YOLOv5 is easy and well documented, I won't spend too much time explaining it here. I'll only point out what I did in order to get the best results.

I used 416×416 image sizes (if you’re not familiar with YOLOv5, you must know that their training script resizes the images) and a batch size of 32.

In the beginning, I used their base weights (trained on the COCO dataset) called yolov5s, which stands for Small YOLOv5 and apparently has 7.2 million parameters (according to this table), and it wasn't really good after almost 200 epochs. So I restarted my training process with yolov5m, which stands for Medium YOLOv5 and has 21.2 million parameters.

To be honest, I know the number of parameters isn’t the only thing that matters, but for the love of God, let’s keep things simple.

Finally, with 416×416 images, a batch size of 32, 500 epochs, the medium model and almost five hours of waiting (since I was doing this process on my MacBook Pro and not on Google Colab), I got my desired results.

The result

The final result

As you can see, I got pretty good confidence levels on my components. Unfortunately, the confidence levels for the inductors don't fit in the picture, so for a better understanding of the resulting photo, I've put this table here as well:

Confidence levels and coordinates
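
Running the fine-tuned detector (and getting a table like the one above) takes only a few lines through YOLOv5's torch.hub interface; the paths here are placeholders:

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
results = model("test_schematic.jpg")

# one row per detection: xmin, ymin, xmax, ymax, confidence, class, name
print(results.pandas().xyxy[0])
results.save()  # writes the annotated image to disk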

Future works

After finishing this project, I got a few ideas in my head. The very first thing is to generate a netlist for SPICE software. Imagine if you could draw a circuit on paper (most of us engineers usually use paper for our initial designs, right?), then take a photo of it and boom! You have it in your SPICE software.

The second thing that comes to my mind is combining this with OCR software which can understand the numbers and units we've used in our electrical circuitry. For example, it would understand that 200K beside a resistor means the resistor has 200 kiloohms of resistance.

Then we can feed all this data into some calculator which can help us better understand our designs and give us information about the behavior of our circuit in different situations, such as changes in current, voltage or frequency.

Conclusion

In conclusion, I believe every kind of OCR can be helpful in our lives. I remember when I was a child, there was some sort of pen-like device which could read verses of the Quran, and I liked the whole idea.

Later, when I got older, I decided to find out how that magical pen works and whether we could improve it. Yes, the Quran is very important to Muslim people, and there is no doubt about that, but that wasn't enough in my opinion, since that device could also be used by visually impaired people. They could use that pen to read the Quran and other types of text as well.

And now, I have the knowledge to make the world a better place and use technology to people's advantage. After making a real-time sign language translation program with AI, I have decided to conquer other realms of computer vision as well.

Lastly, I have to say there is a very vast world of the unknown we can uncover using our knowledge, and I try my best to do that.

Regards.