The future of content is AI

I personally never counted myself as a content creator, but apparently I always should have been counted as one. Why, you may ask? The answer is easy. I have a habit of filming my work, writing blog posts (mostly in Persian), posting my work and code on Twitter, and so on. All of these are the behaviors of a content creator.

My content, on the other hand, was mostly about me. I never cared about making those types of advertisement reports (where you have to care a lot about SEO, back-links, and such) because it wasn't my job to create the content. Now I am thinking about it, but in my own way.

The history of content creation

Before going deep into this, let's clear something up. This part of the article is from my own point of view; it's not a definitive history, but at least this is how I saw content creation and how it works.

One-way content generation

Let's go back a lot. I mean A LOT! Maybe it's 2006, you open a URL in your Internet Explorer, and you find a very ugly static website written in pure HTML. Some of those websites also had annoying JS effects (we should be grateful for the modern use of JS; there are no mouse-following figures or rain in the background anymore!).

This is an example of a one-way form of content: content you cannot react to as is. You had to find an email address on the Contact Us page, or fill out their forms, and they usually never checked the respective inbox. So you couldn't help them improve their content or right their wrongs.

Here comes the blog

I was almost 12 when I discovered the concept of blogs, and I started writing on a free blogging service (which is very popular in the Iranian community, and you can find it here), and it was amazing.

The whole greatness of blogging was that it wasn't "one-way": people could interact with each other using comments, and at the same time, chatrooms were also pretty popular. So we usually had a good time with our internet pals in those days. And do you know what that means?

User generated content (UGC) matters!

It really does. Imagine you want to get a new hair dryer. What do you do? I guess you go to Amazon and search for hair dryers. A hair dryer is not something you buy every week, so you need to know whether the hair dryer in question lasts long enough, how much power it draws, and whether it meets the health guidelines and regulations for a product like that.

You read the description, specifications, and other details provided by the seller on Amazon. That's good, but not great. You have an idea about the product, but you don't know what its user experience is like. What can we do about this? Easy: we scroll down to the user reviews, where people have rated the product and described their feelings about it.

In the reviews section, you find out this product doesn't last that long. You may even search other platforms for the very same product to find out what is wrong with it. For me, the second platform is always YouTube. People do a lot of good product reviews on YouTube (even those sponsored by the brand we're looking at are usually helpful as well!), and guess what? YouTube is also a platform for UGC!

But wait, it doesn't end here. You've read this far, but you're still confused about the title. I have to say, this is where the actual fun begins!

The future of content is AI!

Now this is the part you were waiting for. In this section, I'm going to talk about how AI can help us create better content, because recently I've been following the AI art trend a lot! I've also coded and developed some AI art tools myself! I was even too cheap to pay for a Copilot membership, so I created my own version. See? I officially joined the army of content creators, but in my very own way.

Sentiment Analysis

I guess this one is not really about content creation but more about content moderation. But moderation is as important as creation (if not more so), so I had to put it here. Running sentiment analysis on our user-generated content can help us find out whether a product has poor quality, how toxic our community is, and so on.

To be honest, it helps us more than it seems. It helps us build a better community (pretty much by banning problematic users) and give feedback to suppliers who sent us poor-quality products. It doesn't end here, by the way; my example is still about a retail store and not a general website.
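To make the idea concrete, here is a toy sentiment scorer. It is only a hard-coded keyword sketch of my own, not a real model; a production moderation pipeline would use a trained classifier instead.

```python
# Toy sentiment scorer: counts hand-picked positive/negative words.
# Real systems use trained models, not keyword lists like this.
POSITIVE = {"great", "good", "love", "excellent", "durable"}
NEGATIVE = {"bad", "broke", "poor", "terrible", "toxic"}

def sentiment(review: str) -> str:
    tokens = review.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("this hair dryer broke after a week, poor quality"))  # negative
```

With labels like these in hand, you can aggregate them per product or per user to spot poor-quality items or toxic accounts.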

In the modern day, you have to watch your tongue more than before. A lot of people have stood up for their rights, and the typical words of your daily speech can be offensive to others. So in this particular case, I believe these analysis tools can help us improve even in our personal lives by building a better community.

We've talked enough about content moderation using AI; let's move on to the fun and interesting topic of content generation!

The rise of AI art generators

AI art is basically an empire now. AI art generators such as DALL-E 2 and Midjourney (you might like to take a look at my open-source take on Midjourney, OpenJourney, just saying) are very popular, and on the other hand, Stable Diffusion (and its forks) is really growing on the open-source side as well.

You cannot deny that these are pretty cool content-creation tools. They can help us bring our ideas to life in the form of art, 3D design, interior design, UI/UX, and a lot more. So we have to talk about them; we have to recognize these images as the new content people create and enjoy!

It doesn't end here either. There is also a new trend of text-to-music, which means a lot of music creators (me included!) may use AI to create music as well. This is the beauty of AI content creation.

And finally, everyone offers AI these days.

Yes, every company with even a small connection to content creation offers AI! We expect the big names of our industry, such as Google or Meta, to provide tons of AI tools: libraries, frameworks, models, datasets, and even programming languages. But do you know what amazed me recently?

Notion also provides AI solutions for productivity and ideas! You can basically have a sort of copilot for your content calendar or, even better (for some people, worse), an AI companion for task management, and I think this is great.

Now that we have tools to create text, images, videos, and sounds, what should our next step be? I guess we have to read minds (and I'll write an article about that as soon as possible).

Conclusion

Now let's conclude (I know, I have this section in every blog post and I never put anything useful here). We just traced where the age of digital content creation started. The internet played a great role in revolutionizing this age and opened new doors of opportunity for us, people who usually couldn't easily get the chance to write in a magazine or newspaper. These days we write on Twitter (at least as long as we can write without paying Elon Musk for it!), and it needs no privilege. It only requires an internet connection.

So AI can help us improve our content: it can help us write better reviews, and it can help us turn a bunch of photographs into a full report. You just input your photos, the image-to-text pipeline extracts the details of each photo, then you edit the results, and you have your report.
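That photos-to-report flow can be sketched in a few lines. The captioning model is passed in as a plain function here, because this is a hypothetical outline of my own; the `caption` argument stands in for whatever image-to-text model you actually use.

```python
from typing import Callable, List

def photos_to_report(paths: List[str], caption: Callable[[str], str]) -> str:
    # Run the image-to-text step on every photo, then stitch the
    # captions into a draft report the author edits afterwards.
    lines = [f"- {path}: {caption(path)}" for path in paths]
    return "Draft report\n" + "\n".join(lines)

# Example with a fake captioner standing in for the real model:
fake_caption = lambda path: f"a photo named {path}"
print(photos_to_report(["site1.jpg", "site2.jpg"], fake_caption))
```

Swapping `fake_caption` for a real captioning model gives you the editable draft described above.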

In my opinion, AI is here to help us make the world a better place, because it gives everyone an equal chance of being an author, artist, musician, or anything else that required some level of privilege in the past.

Severus does the magic

It wasn't long after I told you I was too cheap to pay $10 a month for GitHub Copilot that I came up with the idea for Severus, my own AI pair programmer. It was something that went boom. My blog usually doesn't get more than 20 or 30 viewers a day (at its best), and for almost a week I had more than 200 views per day. Since people showed interest in yet another AI pair programmer, I have decided to continue working on Severus more seriously.

Severus code generation
Severus is now capable of being accessed as an API

My plans for Severus

So in this article, I discuss a bunch of problems I may face on the long path of creating Severus and making it available as end-user software. There are some serious concerns; for example, when I talked about the idea of Severus with one of my colleagues, he told me he was concerned about the confidential code he has written.

Almost all of your concerns are valid (except the one about this whole process being handled by the Illuminati), and they are my concerns as well. The next problem I may face is scaling, so I perhaps need to hire a well-educated DevOps engineer.

In this section, I explain all of my serious concerns and needs, and I expect some help from you, the kind readers of the article.

The Community

Creating a community around something which is honestly a weekend project doesn't seem like a good idea. You may say the same thing happened with the Linux kernel. You're right, but this is a little bit different: there are tons of tools out there which may work much better than Severus.

Also, it is important to know the place for creating the community. A subreddit? A discord server? A room on Matrix? An internet forum? I have no idea honestly.

So this is the biggest concern for me. The community!

Performance and text-generation glitches

The performance is good, thanks to the Hugging Face Inference API. Actually, knowing that the Hugging Face API exists helped me with the implementation. But I still have some concerns here.

My main concern is that BLOOM starts generating text which cannot be classified as code. I tried different ways to get better results, but I still need some way to verify that the generated result is code, and not prose which merely includes code. And this is really the hard part, I guess.

For this purpose, I may need some help. The results must be validated in order to get a good AI pair programmer; otherwise it'll become more like an annoying colleague, or an intern who knows something but can't gather their thoughts.
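One cheap validation step, assuming the target language is Python (this heuristic is my own sketch, not part of Severus): try to parse the generated text with Python's `ast` module, and reject anything that isn't syntactically valid code.

```python
import ast

def looks_like_python(generated: str) -> bool:
    # If the text parses as a Python module, treat it as code;
    # prose mixed with code will usually fail to parse.
    try:
        ast.parse(generated)
        return True
    except SyntaxError:
        return False

print(looks_like_python("def add(a, b):\n    return a + b"))       # True
print(looks_like_python("Sure! Here is the code: def add(a, b)"))  # False
```

It only covers one language, and it would accept prose that happens to parse (a bare quoted string is valid Python), but it filters out the obvious chatter.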

The Product

And my final concern/plan is the product. For current use, I only have a simple application which runs on port 5000 on my laptop. Nothing more. There is no authentication, no user validation, no monitoring, no scaling, no infrastructure. Basically, a MacBook Pro which runs tons of programs daily, and Severus is currently one of them.
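For reference, that "simple application on port 5000" could be as small as this. This is a stdlib-only stand-in of my own, not the actual Severus code, and `generate_code` is a stub replacing the real model call.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_code(prompt: str) -> str:
    # Stub: the real service would call the hosted model
    # (e.g. an inference API) here instead.
    return f"# TODO: completion for {prompt!r}"

class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the (stubbed) model, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(
            {"completion": generate_code(payload.get("prompt", ""))}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve it locally, as on my laptop:
# HTTPServer(("127.0.0.1", 5000), CompletionHandler).serve_forever()
```

Everything missing from this sketch (auth, monitoring, scaling) is exactly the infrastructure work described above.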

I had a VS Code extension in mind; I also thought of a web app as the MVP, where you can easily copy your code and then use it in your very own projects (and of course, that won't be the best choice for a confidential piece of code).

Although I have ideas in mind, I still need more brainstorming about how this project should be delivered to you as a product.

Conclusion

I still have a lot to do on this project. There might be some language detection to check whether the generated output is code or not, and there might also be some more code validation to avoid mixing different programming languages.

Overall, this is one of the most difficult and, at the same time, most fun projects I've ever done. I won't give up on it, even if it seems like a painful and expensive hobby to the people around me 🙂

 

A to Z of making an intelligent voice assistant

It was 2011, a sad year for a lot of Apple fans (me included), because Steve Jobs, one of the original co-founders of Apple, died that October. It would have been even sadder if there hadn't been the iPhone 4S and its features that year.

A few years prior to the first introduction of Siri (which arrived with the iPhone 4S), a movie called Iron Man came out from Marvel Studios. Unlike in the comic books, Jarvis wasn't an old man in this movie; Jarvis was an A.I. I'm not sure whether the movie inspired companies to add voice assistants to their systems, but I'm sure a lot of people bought those phones or tablets just to have their own version of Jarvis!

Long story short, a lot of engineers like me were under the influence of the MCU (Marvel Cinematic Universe) and Apple, and wanted to go about having a voice assistant a little bit differently! Instead of buying an iPhone 4S, we preferred to start making our own voice assistants.

In this article, I'm discussing the basics you need to learn to make your very own version of Siri. I warn you, though: there will be hardly any code, at least in this one!

How does a voice assistant work?

In order to make something, we first need to learn how on earth that thing works! So let's discuss voice assistants and how they work. They're much simpler than you think. I guarantee your mind will be blown by their simplicity!

  • Listening: a voice assistant, as the name suggests, needs to listen to voices and detect decent human speech. For this, we need speech recognition systems, which will be discussed further below. We can either make one ourselves or use one that's already made.
  • Understanding: in the 2015 movie Avengers: Age of Ultron, Tony Stark (a.k.a. Iron Man) says "Jarvis is only a natural language understanding matrix." Setting the matrix part aside, the rest of that sentence makes sense to me. Voice assistants need to understand what we tell them. They can have A.I., hard-coded answers, or a little bit of both.
  • Responding: after processing what we've said, the voice assistant needs to provide a response that fits our request. For example, you say "Hey Alexa, play music," and your Alexa device asks you for a title; you say "Back in Black," and she plays the song from Spotify or YouTube Music.
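Put together, the three steps above form a tiny pipeline. Every function body here is a hypothetical stub of mine, just to show the shape of the loop, not a real implementation.

```python
def listen() -> str:
    # Stub: a real assistant captures microphone audio and runs it
    # through a speech recognition system to get a transcript.
    return "play music"

def understand(utterance: str) -> str:
    # Stub: map the transcript to an intent, via rules or a model.
    return "play_music" if "play" in utterance else "unknown"

def respond(intent: str) -> str:
    # Stub: produce a reply that fits the detected intent.
    replies = {"play_music": "Which song would you like?",
               "unknown": "Sorry, I didn't catch that."}
    return replies[intent]

# One pass through the listen -> understand -> respond loop:
print(respond(understand(listen())))
```

A real assistant just runs this loop forever, with each stub replaced by the systems described in the rest of the article.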

Now we know about the functionality. What about the implementation? That's a whole other story. The rest of the article is more about the technical side of making an intelligent chatbot…

Implementation of a Voice Assistant

Speech Recognition

Before we start to make our voice assistant, we have to make sure it can hear. So we need to implement a simple speech recognition system.

Although it's not really hard to implement a speech recognition system, I personally prefer to go with something already made, like Python's SpeechRecognition library (link). This library sends the audio signal directly to IBM, Microsoft, or Google APIs and shows us the transcription of our speech.

On the other hand, we can make our own system with a dataset that has tons of voices and their transcriptions. But as you may know, you need to make your data diverse af. Why? Let me explain it a little bit better.

When you have only your own voice, your dataset doesn't have decent diversity. If you add your girlfriend, sister, brother, co-workers, etc., you still have little diversity. The result may be decent, but it limits itself to your own voice, or the voices of your family members and friends!

The second problem is that your very own speech recognition system can't understand that much, because your words and sentences might be limited to the movie dialogues or books you like. We need diversity to be everywhere in our dataset.

Is there any solution to this problem? Yes. You can use something like Mozilla's Common Voice dataset (link) for your desired language and build a speech recognition system. This data is provided by people around the world, and it's as diverse as possible.

Natural Language Understanding

As I told you, a voice assistant should process what we tell her. The best way of processing is artificial intelligence, but we can also do a hard-coded proof of concept.

What does that mean? Hard-coding in programming means that when we want a certain input to have a fixed output, we don't rely on our logic for that answer; we just write code like "if the input is this, give the user that," with no regard for the logic. In this case, the logic could be A.I., but we tell the machine: if the user says Hi, you simply say Hi!
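That hard-coded approach is literally just a lookup table. A minimal sketch (the phrases and replies here are made up for illustration):

```python
# A hard-coded "understanding" layer: fixed inputs map to fixed outputs.
RESPONSES = {
    "hi": "Hi!",
    "bye": "Goodbye!",
}

def hard_coded_reply(utterance: str) -> str:
    # Normalize the input, then fall back when nothing matches.
    return RESPONSES.get(utterance.strip().lower(), "Sorry, I don't understand.")

print(hard_coded_reply("Hi"))  # Hi!
```

The fallback branch is exactly where the A.I. part takes over in a real assistant.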

But in real-world applications, we can't just go with A.I. or hard-coded functions alone. A real voice assistant is usually a combination of both. How? When you ask your voice assistant for the price of Bitcoin, that's a hard-coded function.

But when you just talk to your voice assistant, she may make up answers for you that have a human feel, and that's where A.I. comes in.

Responding

Although providing responses can be considered a part of the understanding process, I prefer to talk about the whole thing in a separate section.

A response is usually what the A.I. tells us, and the question is: how does the A.I. know what we mean? This is an excellent question. Designing the intelligent part of a voice assistant, or of chatbots in general, is the trickiest part.

The main backbone of responses is your intention. What is your chatbot for? Is it a college professor's assistant, or just something that gives you a Stark feeling? Is it designed to flirt with lonely people, or to help the elderly? There are tons of questions you have to answer before designing your own assistant.

After you've asked yourself those questions, you need to classify what people would say to your bot into different categories. These categories are called intents. Let me explain with an example.

You go to a café, the waiter gives you the menu, and you look at it, right? Your intention is now clear: you want some coffee. So how do you ask for coffee? I would say, "Sir, a cup of espresso please." It's that simple. In order to answer all coffee-related questions, we need to consider as many different phrasings as possible. What if the customer asks for a macchiato? What if they ask for a mocha? What if they ask for a cookie with their coffee? And this is where A.I. can help.

A.I. is nothing more than making predictions using math. A long time ago, I used to write the whole A.I. logic myself. But later, a YouTuber called NeuralNine developed a library called neural intents, and it's for exactly this purpose! How does this library work?

It's simple. We give the library a bunch of questions and our desired answers. The model we train can classify questions and predict which category our sentences belong to. Let me show you an example.

When you say "a cup of espresso please," the A.I. sees the words cup and espresso. What happens then? She knows these words belong to the coffee category, so she gives you one of the fixed answers from that category.
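The mechanics can be imitated with a tiny bag-of-words classifier. This is a hand-rolled sketch with made-up intents, not the actual neural intents library, which trains a real neural network instead of counting words.

```python
# Hypothetical intents: each maps trigger words to canned answers.
INTENTS = {
    "coffee": {"words": {"espresso", "cup", "macchiato", "mocha", "coffee"},
               "answers": ["Coming right up!"]},
    "greeting": {"words": {"hi", "hello", "hey"},
                 "answers": ["Hello there!"]},
}

def classify(sentence: str) -> str:
    # Score each intent by how many of its trigger words appear.
    tokens = set(sentence.lower().replace(",", " ").split())
    scores = {name: len(tokens & intent["words"]) for name, intent in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("a cup of espresso please"))  # coffee
```

Once the category is known, the bot picks one of that category's fixed answers, exactly as described above.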

Keeping the answers fixed, by the way, is not always a good thing. For some use cases, we may need a generative chatbot which can compose responses like a human. Those bots are more complex and require more resources, study, and time.

Final Thoughts

The world of programming is beautiful and vast. When it comes to A.I., it becomes even more fun, of course. In this article, I tried to explain how a voice assistant can be constructed, but I didn't actually dig deep into the implementation.

Why so? I guess implementation is good, but in most cases, like every other aspect of programming, it's just putting tools together. So learning the concept is much more important in most cases, like this one.

I hope this article was useful for you. If it was, please share it with your friends and leave a comment for me. I'd be super thankful.