How to be a hardware engineer

A computer is made up of hardware and software. Lots of people like to write and develop software, so the internet is full of topics like “How to be a software engineer” or “How to write a computer program”, and there are plenty of great, free tutorials on programming languages and frameworks. Hardware engineering guides, on the other hand, are much harder to find.

In this post, I want to write about how to be a hardware engineer, since it’s my favorite field in computer science and the one I study at university. So, if you want to be a computer hardware engineer, read on and then start digging into whichever topics interest you most.

Definition

A computer engineer, in general, is an engineer who designs, develops and tests computers, both hardware and software. But there are different types of computer engineer: for example, I am a hardware engineer, one of my friends is a software engineer, another is a network engineer, and so on.

So, a hardware engineer is a computer engineer who also knows electronic and electrical engineering (yes, some electronics and electrical engineering courses are taught to software engineering students too). If we go further, we’ll find that a hardware engineer is someone who combines computer science and electronics: they learn both, then put them together to build computer hardware.

Expertise and Fields

Hardware engineering has several fields, and you can learn any one of them to find a job or start your own business.

  1. Computer architecture :
    A computer architect is a computer scientist who knows how to design a computer at the logical level. In my opinion, it’s the most-needed role on a team designing a new computer. When you study computer architecture, you learn the mathematics behind computer hardware, and you also learn how to design different logic circuits, connect them together and build a computer at the logical level.
    So, if you love mathematics, computer programming and computer science in general, this is your field. To learn more, you can also read my book (link).
  2. Digital Electronics :
    This one is actually my favorite. Digital electronics is about the implementation of logical designs: it’s how you build what you have designed using logic and mathematics.
    But for this field of hardware engineering, you need to know electronics and electrical engineering. So, if you like digital electronics, start studying electrical engineering today!
  3. Digital Design :
    This part is also a favorite of mine, and I actually like it even more than digital electronics. It’s about simulation and synthesis, so you can really ‘feel’ the hardware you have designed.
    How is that possible? A digital designer is someone who is familiar with simulation and synthesis tools such as HSPICE, LTspice and ModelSim, and with hardware description languages such as Verilog or VHDL. It’s about both design and programming, so if you’re a software engineer who wants to learn hardware engineering, this field is for you! Usually, though, hardware engineers learn this field after digital electronics.
    If you’d like to be a digital designer, start improving your programming and algorithm design today; it’s important to know how to code and how to write an algorithm.
  4. Microcontrollers and Programmable devices :
    This field is not strictly hardware engineering, but it still deals with hardware. It’s not about the design and implementation of hardware; it’s about making hardware usable. When you program an ATmega32 chip, for example, you turn a piece of silicon into something that can drive motors and read sensors.
  5. Artificial Intelligence :
    And finally, the field everyone shares! Artificial intelligence is the broadest field in computer science, and it’s applicable in every other part of it.
    But how can we use it in hardware engineering? We can use AI to improve our hardware; for example, our architectures can be analyzed by AI. We can build hardware that embeds AI to solve problems (e.g. ZISC processors). And finally, we can use AI in signal processing, which is usually treated as a field of hardware/electronics/audio engineering.

Start point

I tried to suggest starting points in the previous part, but now let me describe the general starting point. To learn hardware engineering, you need to know discrete mathematics and logic. You also need to know how to express boolean functions as electrical circuits (a.k.a. logic circuits). After learning these, get familiar with computer architecture, and then you can choose your favorite field. I have experience in all of these fields (except AI and signal processing), but my favorites are digital design and digital electronics.
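To get a feel for expressing boolean functions as circuits, here is a minimal C sketch of a one-bit full adder, written directly from its boolean equations (the function name fullAdder is just my own illustration):

#include <stdio.h>

/* One-bit full adder from its boolean equations:
   sum = a XOR b XOR cin, carry = (a AND b) OR (cin AND (a XOR b)) */
void fullAdder(int a, int b, int cin, int *sum, int *cout){
 *sum = a ^ b ^ cin;
 *cout = (a & b) | (cin & (a ^ b));
}

int main(){
 int sum, cout;
 fullAdder(1, 1, 0, &sum, &cout);
 printf("sum=%d carry=%d\n", sum, cout); /* prints sum=0 carry=1 */
 return 0;
}

Every operator here (XOR, AND, OR) maps one-to-one to a physical logic gate, which is exactly the bridge between boolean functions and circuits.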

Which field you choose is up to you, but you need those fundamentals to understand any of them. And if you keep improving your knowledge of computer science in general, you will be a successful computer engineer!

Good Luck 🙂

 

Reverse engineering the 8086, from a calculator to the most-used processor

If you have a laptop or desktop computer, you probably use an 8086-based CPU, or to be exact, an implementation of the x86 family. For example, I have a Lenovo laptop with a Core i5 CPU, which is based on the x86 architecture. In this article, I want to talk about the x86 architecture, and to explain how it works, I’ll start with the simplest member of the family: the 8086.

The 8086 was Intel’s first 16-bit microprocessor, and the chip that gave the x86 family its name. This is why it’s famous, and in a lot of cases people prefer to use or study it. Everything is well documented, and there are countless tutorials and examples on how to use it! For example, if you search for “interfacing circuits”, you will find plenty of 8086-based computers that people have built, connected to interface devices such as monitors, keyboards and mice.

Before we start reverse engineering and build our simple x86-compatible computer, let’s take a look at the machine code structure of the 8086. Here we’ll only review the register addressing mode, because it’s the easiest mode to understand and re-implement.

In this mode, we only need to look at a two-byte (16-bit) instruction code. It looks like this:

Byte 1 : |Opcode|D|W|
Byte 0 : |MOD|REG|R/M|

What are these fields, and why should we learn them? Since we want to reverse engineer the 8086 and learn how it works, we need to know how this processor understands programs! So let’s go through the fields in these two bytes:

  • Opcode : a 6-bit field that determines the operation (for example ADD, SUB, MOV, etc.)
  • D : Determines whether REG holds the source or the destination operand. To keep the reverse engineering simple, we fix it at the constant 1, so the REG field in byte 0 is always the destination.
  • W : Determines the data size. Like D, we fix it at the constant 1 to keep things simple, so we can only operate on 16-bit values.
  • MOD : Determines the addressing mode. As decided before, we only model the register addressing mode, so we fix it at the constant 11.
  • REG and R/M : With D fixed at 1, REG is the destination register and R/M is the source. Pay attention: this special case only works because we model the register addressing mode with D = 1; in other modes, R/M can refer to a memory operand instead of a register.

Now we know how the 8086 understands programs. So far, our instruction code looks like this:

Opcode D W MOD REG R/M
xxxxxx 1 1 11 xxx xxx
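To make this layout concrete, here is a minimal C sketch that pulls the fields out of a 16-bit instruction word (the decode function is my own illustration, and it assumes byte 1 is the high byte of the word):

#include <stdio.h>
#include <stdint.h>

/* Field layout from the diagram above:
   bits 15..10 = Opcode, 9 = D, 8 = W, 7..6 = MOD, 5..3 = REG, 2..0 = R/M */
void decode(uint16_t insn){
 unsigned opcode = (insn >> 10) & 0x3F;
 unsigned d = (insn >> 9) & 0x1;
 unsigned w = (insn >> 8) & 0x1;
 unsigned mod = (insn >> 6) & 0x3;
 unsigned reg = (insn >> 3) & 0x7;
 unsigned rm = insn & 0x7;
 printf("opcode=%02x d=%u w=%u mod=%u reg=%u r/m=%u\n",
        opcode, d, w, mod, reg, rm);
}

int main(){
 decode(0x8BC1); /* MOV AX, CX -- we will derive this encoding below */
 return 0;
}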

Let’s assign codes to our registers. Since we decided to simplify the reverse engineering process and to use only 16-bit registers, I’ll model just the four main registers: AX, BX, CX and DX. This table shows their codes (these are the real 8086 register codes):

Code Register
000 AX
001 CX
010 DX
011 BX

As you can see, we are now able to convert instructions to machine code. To make the reverse engineering process even simpler, we could ignore MOV, but I prefer to include it. So, let’s look up the opcodes for MOV, ADD and SUB:

Opcode Operation
100010 MOV
000000 ADD
001010 SUB

Now we can convert register-to-register 8086 assembly instructions using this format. For example, take this piece of code:

MOV AX, CX 
MOV BX, DX
ADD AX, BX

To convert this piece of code to machine code, we use the tables we made. The resulting codes are:

1000101111000001
=> 0x8bc1

1000101111011010
=> 0x8bda

0000001111000011
=> 0x03c3
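If you want to check these encodings by machine, here is a minimal C sketch that builds the 16-bit words from the tables above (the encode function and the OP_ constants are my own names, not part of any real assembler):

#include <stdio.h>
#include <stdint.h>

/* Register codes from the register table */
enum { AX = 0, CX = 1, DX = 2, BX = 3 };

/* 6-bit opcodes from the opcode table */
enum { OP_MOV = 0x22, OP_ADD = 0x00, OP_SUB = 0x0A };

/* Build an instruction word with D = 1, W = 1 and MOD = 11,
   so REG is the destination and R/M is the source */
uint16_t encode(unsigned opcode, unsigned dst, unsigned src){
 return (uint16_t)((opcode << 10) | (1u << 9) | (1u << 8) | (0x3u << 6) | (dst << 3) | src);
}

int main(){
 printf("0x%04x\n", encode(OP_MOV, AX, CX)); /* 0x8bc1 */
 printf("0x%04x\n", encode(OP_MOV, BX, DX)); /* 0x8bda */
 printf("0x%04x\n", encode(OP_ADD, AX, BX)); /* 0x03c3 */
 return 0;
}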

Now we can model our very simple x86-based computer. But note that it has a lot of limitations! For example, we can’t initialize registers, so we’d need to study and implement the immediate addressing mode; we also can’t read anything from memory (x86 is not a load/store architecture, and it lets programs work directly on data stored in memory!). Still, I think this can be helpful if you want to study this popular processor or build projects around it.
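As a taste of what’s missing, here is how the real 8086 encodes an immediate move: MOV r16, imm16 is a single byte, 0xB8 plus the register code, followed by the 16-bit immediate with the low byte first. A minimal sketch (emit_mov_imm16 is a made-up helper name):

#include <stdio.h>
#include <stdint.h>

/* Real 8086 encoding of MOV r16, imm16:
   one byte (0xB8 + reg code), then the immediate, low byte first */
void emit_mov_imm16(unsigned reg, uint16_t imm){
 printf("%02x %02x %02x\n", 0xB8 + reg, imm & 0xFF, (imm >> 8) & 0xFF);
}

int main(){
 emit_mov_imm16(0, 0x1234); /* MOV AX, 0x1234 -> b8 34 12 */
 return 0;
}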

How to make a computer program

This is a cliché in IT and computer-related blogs. You can find at least one post on how to make a computer program in every blog written by a computer expert (scientist, engineer or self-taught practitioner). So I decided to write about it too. In this post, I’m going to explain how your idea can become a program.

I’m not a startup or business person, and I hate it when someone tries to teach other people how to have an idea, so I’ll assume you already have one and want to implement it as a computer program. Let’s start!

Choose your target hardware

Unfortunately, a lot of programmers ignore this important step, but if you pick a specific piece of hardware to develop and implement your idea on, you gain two things:

  1. You learn a new hardware architecture (and maybe organization)
  2. You help someone who wants a specific application on that hardware.

Sometimes writing a calculator seems pretty pointless. Of course it is when you write yet another calculator for Windows or macOS. But when you write a calculator for an Arduino that interacts with a keypad and a display, it’s not.

Write the algorithm

An algorithm describes the way we solve a problem. So, we need to write down the steps of our solution and test them. Sometimes a simple first algorithm is not efficient at all and needs a lot of improvement. Consider this (a fragment that prints all even numbers less than 100):

while( a < 100){
 if(a % 2 == 0){
  printf("%d\n", a); /* print only the even values */
 }
 a++;
}

This is a piece of larger code (assume a starts at 0). But wait, how can we improve it? That if inside the loop can slow the code down. If we know a = 0, we can write something like this:

for(int a = 0; a < 100; a += 2){
 printf("%d\n", a);
}

This version is shorter, and it expresses “all even numbers less than 100” more directly. Both versions have the same time complexity, O(n), although the second one does half as many iterations and skips the check inside the loop. If we had two nested loops, where the inner loop’s condition affected the outer loop’s, we would have to spend real time on calculating the time complexity and optimizing it.

Finally, you will realize there are “classic” algorithms that are already optimized; you can just use them and model your idea on top of them, as in the sketch below.
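For instance, sorting is one of those classics; instead of hand-rolling a sort, you can usually reach for the one in your language’s standard library. A minimal C sketch using qsort (the array contents here are arbitrary):

#include <stdio.h>
#include <stdlib.h>

/* Comparison callback for qsort: ascending order of ints */
int cmpInt(const void *a, const void *b){
 int x = *(const int *)a;
 int y = *(const int *)b;
 return (x > y) - (x < y);
}

int main(){
 int nums[] = {42, 7, 19, 3, 88};
 int n = sizeof(nums) / sizeof(nums[0]);
 qsort(nums, n, sizeof(nums[0]), cmpInt);
 for(int i = 0; i < n; i++){
  printf("%d ", nums[i]); /* prints 3 7 19 42 88 */
 }
 printf("\n");
 return 0;
}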

Choose the language

This step is also one of the most important. If you want to program an AVR chip, JavaScript is obviously not the best choice; there are tools that let you write programs for those chips in JS, but the language was not made for talking to an AVR! Likewise, when you build a website, especially the front-end, C is not your best choice! But if the program we want to make is a general-purpose desktop program or a school/university project, we are actually free to choose any language!

Imagine we want to write a simple program that does addition using only bitwise operations. We can write it in C/C++ like this:

#include <stdio.h>

int bitwiseAdd(int x, int y){
 while(y != 0) {
  int carry = x & y; /* bits where both x and y are 1 become carries */
  x = x ^ y;         /* add without carrying */
  y = carry << 1;    /* shift the carries into place */
 }
 return x;
}

int main(){
 printf("%d\n", bitwiseAdd(10 , 5));
 return 0;
}

And you can write it in Ruby like this :

def bitwiseAdd(x, y)
 while y != 0
  carry = x & y
  x = x ^ y
  y = carry << 1
 end
 return x
end

puts bitwiseAdd(10, 5)

But when you want to communicate with hardware directly, you’ll need a lower-level language. C/C++ are actually mid-level languages. They can help you communicate with hardware, like this piece of AVR code (CodeVision-style C) that blinks a pin:

while(1){
 PORTC.1 = 0;    /* drive pin 1 of PORTC low */
 delay_ms(1000); /* wait one second */
 PORTC.1 = 1;    /* drive the pin high again */
 delay_ms(1000);
}

or, like that bitwiseAdd(x, y) function, they can help us write normal programs. Assembly language, though, is a truly low-level language; we use it when we need to talk to the hardware directly.

You see, every programming language can help us; which one to use simply depends on the conditions.

And …?

Now you probably know how a computer program is made. But if you really want to become a developer, you’ll have to study paradigms, methodologies and so on. I tried to keep this article simple; later, I’ll write more about those topics.

Microcontrollers, Design and Implementation released!

It’s been about two years since I started seriously studying computer architecture. In those years, I learned a lot, and I managed to simulate and implement a microprocessor similar to real ones. In the summer of 2016, I decided to share my experience with others, so I started writing this book. It has seventeen chapters, and after reading it, you will have a solid grasp of the concepts of computer architecture.

Chapters

  • License – Licensing and Copyrights
  • Introduction – A quick review of the book, defining its target audience.
  • Chapter 1 : What’s a microcontroller? – This chapter defines a microcontroller. After reading it, you’ll understand the internal parts of a microcontroller. It’s all theory, but you need the concepts.
  • Chapter 2 : How to talk to computer? – In this chapter, we take a quick look at programming and then at machine language. We also determine the word size of our processor.
  • Chapter 3 : Arithmetic Operations – This chapter focuses on arithmetic operations in base 2.
  • Chapter 4 : Logical Operations – This is all about boolean algebra, the very basic introduction to logic circuits.
  • Chapter 5 : Logical Circuits – Our journey starts here: we learn how to build logic functions using NAND, and then we learn the other logic gates.
  • Chapter 6 : Combinational Circuits – This chapter is where you learn how to combine simple logic blocks into more complex ones; in particular, you learn how to implement Exclusive OR and Exclusive NOR using other gates.
  • Chapter 7 : The First Computer – In this chapter, we build a simple addition machine.
  • Chapter 8 : Memory – In this chapter, we take a look at sequential circuits.
  • Chapter 9 : Register File – Having learned sequential circuits, we build registers and then our register file.
  • Chapter 10 : Computer Architecture – In this chapter, we’ll learn the theory and basics of computer architecture and organization.
  • Chapter 11 : Design, Advanced Addition Machine – In this chapter, we add memory blocks to our addition machine.
  • Chapter 12 : The Computer (Theory) – In this chapter, we decide what our computer should do; in other words, we design a simple ISA.
  • Chapter 13 : Arithmetic and Logical Unit – Now it’s time to design our ALU.
  • Chapter 14 : Program Structure – In this chapter, we settle on programming and machine language, and we design a simple instruction code.
  • Chapter 15 : Microcontroller – And finally, we add the RAM to our ALU and get our simple microcontroller.
  • Chapter 16 : Programming and Operating System – In this chapter, we talk about the software layer of computers.
  • Chapter 17 : The Dark Side of The Moon – The final chapter is all about making real hardware; we take a look at transistors, integrated circuits and HDLs.

Link to PDF File : Download