alicanakca

Math Student at Izmir University of Economics. I'm working on Machine Learning and Data Science. alicanakca.com

Pixel Art's Past and Future

Civilizations have accumulated much of their historical wealth through the continual changes in the branches of art over the years. It is up to us, as observers of these art movements, to make sure this accumulated treasure is passed on to the future as effectively as possible. In this post, we will discuss the future of Pixel Art by looking at Computer Art, which has been emerging since the 1960s, when digitalization began.

Before we begin, let's lay out our main subjects. First, we will ease into the topic by tracing the history of the pixel, the building block of Pixel Art, from its inception. That way, we will be able to make a sound prediction about its future.

Etymology of Pixel

The word 'pixel' is formed by combining 'pix,' an abbreviation of 'pictures,' with the '-el' ending also found in words like 'voxel' and 'texel.' Because a pixel is the smallest element in an image, the 'el' in the word comes from the first two letters of 'element.' The abbreviated form appeared in the headlines of the magazine 'Variety,' where 'pix' stood for pictures. Furthermore, we can see that the phenomenon now known as the pixel was expressed in various other words before this magazine edition was published in 1934.

Instead of the word 'pixel,' Alfred Dinsdale used phrases such as 'mosaic of dots,' 'mosaics containing selenium,' and 'thousands of little squares' in his 1927 essay in the journal Wireless World.

The term 'pixel' was used by Frederic C. Billingsley in 1965 to describe the picture elements of photographs taken by space probes on JPL's space missions. Although he said he had learned the term from Keith E. McFarland, McFarland maintained that he had no idea where the combination 'pixel' came from and that the word was already in use at the time.

Let's look at the technology of pixels and their role in our lives now that we've briefly examined their historical context.

In simple terms, a pixel is the smallest controllable element of an image portrayed on a screen in a digital display system. Color intensities vary from pixel to pixel; the colors red, green, and blue are mixed to produce this variation.
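To make this concrete, here is a minimal sketch of treating an image as a grid of RGB pixels. It assumes the Pillow library is installed, and the 4×4 color gradient is made up purely for illustration:

```python
# A minimal sketch: an image is just a grid of pixels, each an (R, G, B) triple.
# Requires Pillow (pip install Pillow); the 4x4 gradient below is made up for illustration.
from PIL import Image

WIDTH, HEIGHT = 4, 4
img = Image.new("RGB", (WIDTH, HEIGHT))

for y in range(HEIGHT):
    for x in range(WIDTH):
        # Each pixel's color is a mix of red, green and blue intensities (0-255).
        r = x * 85   # red grows from left to right
        g = y * 85   # green grows from top to bottom
        b = 128      # constant blue component
        img.putpixel((x, y), (r, g, b))

# Scale up with NEAREST so the individual pixels stay visible, as in pixel art.
img.resize((WIDTH * 64, HEIGHT * 64), Image.NEAREST).save("pixels.png")
```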

I would like to draw your attention to the shapes of pixels, regardless of the theme. For most of us, pixels bring square shapes to mind. However, this isn't entirely accurate: pixels on screens designed for the 720p and 1080i video formats are square, while the pixels on screens with other resolutions are rectangular.

The shapes of the pixels on a screen matter for the display side of pixel art, which we will discuss shortly, because they affect the aspect ratio of the images shown on the screen.
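As a small sketch of that relationship: the displayed aspect ratio is the stored width-to-height ratio multiplied by the pixel aspect ratio (PAR). The NTSC DV numbers below are only illustrative:

```python
# A minimal sketch of how the pixel aspect ratio (PAR) changes the displayed
# aspect ratio of an image. The example values are illustrative.
def display_aspect_ratio(width_px: int, height_px: int, par: float) -> float:
    """Storage aspect ratio times pixel aspect ratio gives the display aspect ratio."""
    return (width_px / height_px) * par

# Square pixels (PAR = 1.0): a 1920x1080 frame displays at 16:9.
print(display_aspect_ratio(1920, 1080, 1.0))    # ~1.778

# Non-square pixels: a 720x480 frame with PAR 10/11 displays close to 4:3.
print(display_aspect_ratio(720, 480, 10 / 11))  # ~1.364 (4:3 is ~1.333)
```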

The Beginning of Pixel Art

The story of pixel art begins shortly after the beginning of Computer Art. The computers behind pixel art date back to a time when mechanization had a significant impact on human lives.

Computer art, which dates back to the 1960s, has taken on an evolutionary character as a result of digitalization and continues to evolve to this day. Desmond Paul Henry, an Englishman, was one of the first to experiment with computer art, building three drawing machines out of the analog bombsight computers used by bombers during WWII.

The elaborate, abstract, curved images that accompany Microsoft's Windows Media Player are reminiscent of the effects Henry's machines generated. The drawings produced by Henry's drawing machines opened a new branch of art and are among its earliest examples. In 1962, they were exhibited at the Reid Gallery in London.

We emphasized that Computer Art is, by its very nature, in a state of perpetual evolution. By the end of the 1970s, types of art began to appear on digital machines rather than raw machinery. Back then, this art was simply called 'graphics shown on a computer screen.'

In the early 1980s, David Brandin was elected head of the ACM (Association for Computing Machinery), the organization to which our community belongs. During his administration, he introduced people to computers and instilled in them a passion for technology.

The term 'pixel art' was first defined by Adele Goldberg and Robert Flegal, in an essay published by the ACM on December 1, 1982.

Adobe, founded in 1982, developed the PostScript language and digital typefaces, popularizing drawing, painting, and image-processing software in the 1980s. Adobe Illustrator and Adobe Photoshop were introduced in the years that followed, and Adobe Flash, a popular collection of multimedia software for adding animation and interactivity to web pages, was released in 1996. Pixel art became more prominent in video games and music videos in the 2000s. For example, in 2006 Röyksopp released "Remind Me," a work described as "surprising and mesmerizing" by the New York Times and illustrated entirely in pixel art.

Like other schools of art, Pixel Art has developed its own techniques, tools, and audience, as well as the standout works of its era, over an adventure that began roughly 60 years ago and still continues. Pixel art was originally defined by the way it was displayed on a computer screen, but today it is called pixel art whenever it is offered as a product. In the intervening years, some works were created using software and fonts produced by various companies, allowing the art to advance alongside technology.

Artificial intelligence algorithms are the latest result of this evolution. AI is used to support many of the features found in drawing apps. For example, when an app suggests relevant color samples in the palette for a painting in progress, these algorithms are at work.
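One way such a palette suggestion *could* work is to cluster the pixels of the current canvas and offer the cluster centers as swatches. The sketch below is only an illustration of that idea (using scikit-learn's KMeans and a hypothetical `canvas.png`), not how any particular application actually implements it:

```python
# Illustrative sketch: suggest palette colors by clustering the canvas pixels.
# Assumes Pillow and scikit-learn; "canvas.png" is a hypothetical work in progress.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def suggest_palette(image_path: str, n_colors: int = 8) -> np.ndarray:
    # Flatten the image into a list of (R, G, B) rows.
    pixels = np.asarray(Image.open(image_path).convert("RGB")).reshape(-1, 3)
    kmeans = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Each cluster center is an "average" color of one region of the color space.
    return kmeans.cluster_centers_.astype(int)

print(suggest_palette("canvas.png"))  # hypothetical file
```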

Another type of application is having the computer generate an image on its own. Although this field is still young, it lets us make a fairly confident prediction about how it will be used in the future.

Thanks to today's technology, artificial intelligence and deep learning approaches are gradually appearing in this field. Various forms of this art will soon be seen on most platforms, particularly in VR and AR. If we want to make one more prediction, it seems clear that pixel art will also serve the concept of the 'metaverse,' which has become popular in films and novels.

Since ancient times, people have wondered about the uncertainty in the outcomes of events. From the roll of a die to the turn of a card, the concept of chance has developed. Randomness does not have a well-founded definition, but we can loosely describe it as the unpredictable state of a collection of events. For example, when a die is rolled, its outcome cannot be predicted; rolling a 2 is no more likely than rolling a 1.

In this article, we will examine uncertainty starting from the recent past. Finally, we will consider prediction with some regression models. Before we get started, let's outline the article in short headings:

**Summary**
**What is Randomness?**
**Is it Predictable?**
**Application**
**Result**

Summary

Randomness is not an outcome in itself. The result of a coin toss is chaotic, not random (see: Chaos Theory); calling it random is simply a sign that we do not have instruments capable of measuring everything that determines it. The center of gravity of a metal coin is not exactly at its center because of the patterns stamped on it, and this affects the outcome along with factors such as which side the coin is thrown from and the throwing angle. The presence of so many variables acting on the result makes it impossible to predict completely in practice. I would like to underline that it is not impossible in theory.

What is Randomness?

The concept of randomness is a measure of uncertainty. It is not possible to find true randomness except in the quantum case, which I will mention shortly. I have already touched on the reason: it is practically impossible to identify all the variables separately and calculate the result. For example, suppose we want to choose between objects with identical properties; we cannot find pure randomness in that choice either. Because we reason within a set of probabilities, we can, put more simply, predict the result with some accuracy.

You may be familiar with the fact that in the micro-universe we call the subatomic level, a number of familiar laws no longer apply. The reason is uncertainty, that is, the quantum state. Radioactive materials are made up of atoms that decay over time into smaller atoms. Scientifically, the probability that an atom decays within a certain time interval can be calculated, but it is not possible to predict which atom will decay next. Einstein objected: "God does not play dice!" With this statement, he meant that although the methods used are of great benefit in theory, they are not a good way to get closer to nature's secret. The god in question is a philosophical god defined by Einstein himself.

Is it Predictable?

If you have made it to this section, you can answer the question yourself. Outside of the quantum case, which we treat as special, predictions can be made about the distribution of results within the framework of probability. As for full predictability, I would point out that it is only possible in a very unrealistic setting, one where the variables of the chaotic situation I mentioned earlier are ignored.

We have given physical examples, but can random numbers be generated on a computer? The short answer is no: truly random generation is not possible. The numbers are determined as the result of complex algorithms. I want to emphasize the word determined; the algorithms used work deterministically. Whatever the output is, it is certain.
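A minimal demonstration of this determinism, using Python's standard `random` module: feeding the generator the same seed always reproduces the same sequence, so the output is fixed by the algorithm and the seed rather than by chance.

```python
# "Random" numbers from a computer are deterministic:
# the same seed always reproduces the same sequence.
import random

random.seed(42)
first_run = [random.randint(1, 6) for _ in range(5)]

random.seed(42)
second_run = [random.randint(1, 6) for _ in range(5)]

print(first_run)                 # whatever comes out, it is the same on every run
print(first_run == second_run)   # True: fully determined by the algorithm and the seed
```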

Application

We will construct a pseudo-randomness experiment in Python and discuss its predictability. Since this part is for enthusiasts, you can skip ahead to the conclusion. Let's get to know the data set:

We have 5 'independent' variables: gender, age, the number of monthly entries into the application, the number of monthly purchases, and the averages of those purchases; the outcome we want to predict is whether the user stops using the application. However, in our experiment the entries, purchase counts, and purchase averages are each split into 3 monthly values, so we end up with a total of 11 'independent' variables. The churn value we are predicting is the dependent variable.

https://gist.github.com/AlicanAKCA/e263aec434651d96d939918304757d72
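The linked gist contains the original experiment; since it isn't reproduced inline, here is a minimal sketch, with made-up column names, of how such a pseudo-random dataset with 11 'independent' variables and a random churn label could be constructed:

```python
# A minimal sketch (not the original gist): build a pseudo-random "churn" dataset
# with 11 independent variables - gender, age, and 3 monthly values each for
# entries, purchase counts and purchase averages - plus a random 0/1 target.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000  # the article also runs the experiment with 100 and 5000 samples

data = {
    "gender": rng.integers(0, 2, n),
    "age": rng.integers(18, 65, n),
}
for month in (1, 2, 3):
    data[f"entries_m{month}"] = rng.integers(0, 50, n)
    data[f"purchases_m{month}"] = rng.integers(0, 10, n)
    data[f"purchase_avg_m{month}"] = rng.uniform(5, 100, n).round(2)

df = pd.DataFrame(data)
df["churn"] = rng.integers(0, 2, n)  # the dependent variable we try to predict

print(df.shape)   # (1000, 12): 11 features + 1 target
print(df.head())
```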

Result

Since we ran the experiment with 100, 1,000, and 5,000 samples, we obtained 3 result graphs. With a small number of samples the models appear more 'predictive,' while the larger this number gets, the lower the success rate. That is the first observation we can make. Next, we will focus on the fact that as the data grows, the rate of correct prediction falls and the results of the regression models converge toward each other. Why is it so low? And although it is low, how and why does it show results close to 50%?

In fact, we can interpret such low rates as evidence that there is no link between the features and the target. Likewise, we can relate the convergence to 50% to the probability distributions I mentioned earlier.

For example, let the random variable X describe a coin-toss experiment: the coin is thrown into the air and lands showing one of two possible outcomes. We write X = 0 if it comes up tails and X = 1 if it comes up heads, so the state space is {0, 1} and each outcome has probability 0.5. Hence the probability mass function is f(x) = 1/2 for x in {0, 1}, and 0 otherwise.

So far, we have explained the basics of probability theory through an example. Now, let's graph and visualize the event I described through a new example in software. We will see the theory in action!

This time our example will involve a die. We will look at the average of the numbers rolled and find out what number this average converges to. Now, let's take a die, roll it 100,000 times, and write the numbers down on a piece of paper… Just kidding; we can start by importing the libraries.

We created the x and y lists along with our variables and set the axis names for the plt chart. The x and y lists keep track of the number of rolls made so far and the running average of the rolls.

I would like to walk through the 'number' variable and the while loop: we pick 100 numbers from [1, 6] and add them to the sum variable. Our roll-count variable starts at 0, and we increase it by 100 each pass so that we can show it on our graph. Then we divide the sum by the number of rolls and assign the result to our mean variable. Finally, we append the roll count and the mean to the x and y lists.

We repeated this about 1,000 times. Then we plotted the result using plt.plot and plt.show, and printed our lists to the console with the print statements in the lines above. Take a look; it really is striking.
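The full code is in the gist linked further below; as a minimal sketch of the loop described above (assuming Python's `random` module and matplotlib, with variable names roughly following the description), it might look like this:

```python
# Minimal sketch of the simulation: roll a die 100 times per iteration,
# track the running mean, and plot how it converges.
import random
import matplotlib.pyplot as plt

x, y = [], []          # number of rolls so far, and the running average
total = 0
rolls = 0

while rolls < 100_000:                     # about 1000 iterations of 100 rolls each
    total += sum(random.randint(1, 6) for _ in range(100))
    rolls += 100
    mean = total / rolls
    x.append(rolls)
    y.append(mean)

print(x[:5], y[:5])    # peek at the first few points
plt.plot(x, y)
plt.xlabel("Number of rolls")
plt.ylabel("Average of the rolls")
plt.show()
```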

Now we are at the part I am most excited about: the average gets closer and closer to a single 'center of gravity'! Another detail is that this point is the arithmetic mean of the numbers on the die: (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.

As a result, we saw that the average converged to the value we expected. As a practical exercise, you can try the coin-toss example from earlier yourself.

https://gist.github.com/AlicanAKCA/361e8a7607ff5bc6257e3960b0e76598

This project visualizes a currency exchange dataset loaded from an API, classifying and sorting it with Python. I used a different color for each year on the chart. There are about 15 years of data, 4,082 days in total. Although it does not involve machine learning algorithms, I wrote my own data classification and visualization routines.

https://gist.github.com/AlicanAKCA/845cc785a127eb1f9b141c714ad1c2a9
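The gist above contains the original implementation; as a rough sketch of the idea (grouping roughly 15 years of daily rates by year and giving each year its own color), assuming pandas and matplotlib and a hypothetical `rates.csv` with `date` and `rate` columns, it could look like this:

```python
# A rough sketch (not the original gist): color each year's daily exchange rates
# differently on one chart. Assumes a hypothetical rates.csv with 'date' and 'rate'
# columns; the original loads its data from an API instead.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("rates.csv", parse_dates=["date"]).sort_values("date")
df["year"] = df["date"].dt.year

fig, ax = plt.subplots(figsize=(12, 5))
for year, group in df.groupby("year"):
    ax.plot(group["date"], group["rate"], label=str(year))  # one color per year

ax.set_xlabel("Date")
ax.set_ylabel("Exchange rate")
ax.legend(ncol=4, fontsize="small")
plt.show()
```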

I used Decision Tree, Random Forest, Linear, K-Neighbors, and XGBoost regressions with Python. I chose the best model with the cross-validation method and used the R-squared metric to check that it gives consistent results. As a result, it predicted with a nearly 99% success rate, and there is no overfitting problem.

I used 10-fold cross-validation. The results are below:

XGBoost works best on this data, so I used this regression model for the predictions.

https://gist.github.com/AlicanAKCA/8a906d0212fa335ded4b05ac4734db6c
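The gist above holds the full code; as a minimal sketch of the comparison described (10-fold cross-validation with R-squared scoring, assuming scikit-learn and xgboost are installed, and with placeholder `X` and `y` arrays standing in for the real currency data):

```python
# A minimal sketch (not the original gist) of comparing the five regression models
# with 10-fold cross-validation and R-squared scoring.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor

# Placeholder data so the sketch runs on its own; replace with the real dataset.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))
y = X @ np.array([2.0, -1.0, 0.5, 3.0, 0.0]) + rng.normal(scale=0.1, size=500)

models = {
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Linear": LinearRegression(),
    "K-Neighbors": KNeighborsRegressor(),
    "XGBoost": XGBRegressor(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="r2")
    print(f"{name:>13}: mean R^2 = {scores.mean():.3f}")
```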