What is a Computer? Computer Definition

The concept of a computer generally brings to mind a desktop or laptop computer; however, many other electronic devices operate using computers, such as mobile phones and smartphones, tablets and iPads, a wide range of entertainment products (televisions, video game consoles, music and video players), household appliances, cars, and more.


What is a computer?

It is a programmable machine that executes a series of commands to process input data, producing useful information that is then sent to the output units. Physically, a computer is made up of many integrated circuits and various support, expansion, and accessory components. Under the control of a program, these circuits can jointly perform a wide variety of tasks extremely quickly. Such programs are collectively called software.


A computer consists of two essential parts. One is the hardware, its physical structure (electronic circuits, cables, case, keyboard, etc.); the other is the software, its non-physical part (programs, data, information, documentation, etc.).

The hardware is the set of physical devices, each designed to perform specific functions, interconnected to form an integrated system; the prescribed, programmed behavior of each of these components is governed by the software.

Software is a series of programs designed to make the parts of the computer work in an integrated way. The PC, or personal computer, is designed to be operated by a single person, who can take advantage of support programs built for specific tasks, such as office work: calculations, writing text documents, preparing presentations with graphics, communicating via email and social networks, and so on.

Currently there are many programs designed to perform specific tasks in various areas of industry. They run either on personal computers (used by a single person) or on supercomputers, also called high-performance computers: sets of very powerful machines, used by many people and linked together to increase their combined processing power and achieve top performance on very specific functions and tasks.

A desktop computer is a personal computer designed to be used on a fixed surface, primarily a desk or work table, and is characterized by having multiple independent, interconnected components. Extra components and accessories can be added to this type of computer by connecting them directly to the motherboard.

A laptop is a compact, lightweight personal computer built as a single unit. It is easy to transport and is currently the most popular type of personal computer, thanks to its practicality and, in some models, its low cost.

How does a computer work?

A computer is a machine with at least one central processing unit (CPU), a main memory, and some peripherals, that is, input and output devices. The input devices feed in data, the CPU processes it (arithmetic-logical operations), and the output devices communicate the results to the outside world. Thus, the computer receives data, processes it, and presents the results as information, which can then be interpreted, stored, transmitted to another machine or device, or simply printed, all at the decision of an operator or user and under the control of a computer program.
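
As a rough illustration of that cycle, here is a minimal Python sketch in which an "input device" supplies data, a "CPU" function processes it, and an "output device" presents the result as information. The function names and sample data are invented for this example; real hardware is far more involved, but the flow of data is the same.

```python
def read_input() -> list[int]:
    """Input device: feed raw data into the machine."""
    return [3, 1, 4, 1, 5, 9, 2, 6]

def process(data: list[int]) -> dict[str, float]:
    """CPU: arithmetic-logical operations on the input data."""
    return {
        "count": len(data),
        "sum": sum(data),
        "average": sum(data) / len(data),
    }

def write_output(result: dict[str, float]) -> None:
    """Output device: communicate the results to the outside world."""
    for key, value in result.items():
        print(f"{key}: {value}")

# Receive data, process it, and show the results as information.
write_output(process(read_input()))
```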

In fact, computers perform a wide variety of tasks, which makes them general-purpose machines, unlike a calculator, which can only compute within narrow limits. Starting from input data, a computer can perform operations and solve problems in the most varied areas of human endeavor (administrative, scientific, design, engineering, health, communications, etc.), including many problems that would not be directly solvable, or even feasible, without its intervention.

Basically, the capacity of a computer depends on its hardware components, while the diversity of tasks it can perform lies mostly in the software it is able to run and has installed.

Types of computer:

Computers can be of two types:

  • The analog computer, used for a few very specific purposes.
  • The digital computer (general-purpose computer), which is by far the most widely used.

So, in general terms, when one speaks of “the computer”, one is referring to a digital computer. There is also a mixed architecture, the hybrid computer, which is likewise built for special purposes.

During World War II, mechanical analog computers were used, oriented toward military applications. In the same period, the first digital computer, called ENIAC, was developed. It occupied a huge space and consumed large amounts of energy, equivalent to the consumption of hundreds of present-day PCs.

Modern computers are based on integrated circuits, are billions of times faster than the first machines, and occupy a tiny fraction of their space.

Simple computers are small enough to fit in mobile devices. Laptops, tablets, netbooks, notebooks, and ultrabooks can run on small batteries. Personal computers in their various forms are emblems of the so-called information age and are what most people think of as “a computer.” However, embedded systems are computers too, and they are found in many current devices, such as fighter jets, smartphones, industrial robots, MP4 players, toys, and more.

History of the Computer:

Far from being the invention of a single person, the computer is the evolutionary result of the ideas of many people working in areas such as electronics, mechanics, semiconductor materials, logic, algebra, and programming.

  • 2700 BCE: the abacus is used in ancient civilizations such as the Chinese and Sumerian, as the first tool for performing addition and subtraction.
  • Around 830: the Persian mathematician and engineer Muhammad ibn Musa al-Khwarizmi develops the concept of the algorithm, that is, the methodical solution of problems in algebra and numerical calculation using a well-defined, ordered, and finite list of operations.
  • 1614: the Scotsman John Napier invents logarithms, which simplify multiplication and division by reducing them to addition and subtraction.
  • 1620: the Englishman Edmund Gunter invents a forerunner of the slide rule, a manual instrument used from then until the appearance of the electronic calculator to perform arithmetic operations.
  • 1623: the German Wilhelm Schickard invents the first calculating machine, whose prototype disappeared shortly afterward.
  • 1642: the French scientist and philosopher Blaise Pascal invents the adding machine (the Pascaline), which used toothed wheels and of which some original examples are still preserved.
  • 1671: the German philosopher and mathematician Gottfried Wilhelm Leibniz invents a machine capable of multiplying and dividing.
  • 1801: the Frenchman Joseph Jacquard invents, for his brocade loom, a punched card that controls the machine's pattern of operation, an idea that would later be used by the first computers.
  • 1833: British mathematician and inventor Charles Babbage designs and attempts to build the first mechanically operated computer, which he called the “analytical engine.” However, the technology of his time was not advanced enough to make his idea a reality.
  • 1841: the mathematician Ada Lovelace begins working with Babbage on what would be the first algorithm intended to be processed by a machine, which is why she is considered the first computer programmer.
  • 1890: the American Herman Hollerith invents the tabulating machine, drawing on some of Babbage's ideas; it was used to process the United States census. Hollerith went on to found the company that would become IBM.
  • 1893: Swiss scientist Otto Steiger develops the first automatic calculator to be manufactured and used on an industrial scale, known as the Millionaire.
  • 1936: the English mathematician and computer scientist Alan Turing formalizes the concepts of algorithm and Turing machine, which would be key in the development of modern computing.
  • 1938: the German engineer Konrad Zuse completes the Z1, the first machine that can be considered a computer. Mechanical in operation, it was programmable (using perforated tape) and used the binary system and Boolean logic. It would be followed by the improved Z2, Z3, and Z4 models, which used electromechanical relays.
  • 1944: in the United States, IBM builds the electromechanical Harvard Mark I, designed by a team headed by Howard H. Aiken. It was the first large-scale automatic computer created in the United States.
  • 1944: in England, the Colossus computers (Colossus Mark 1 and Colossus Mark 2) are built, with the aim of deciphering German communications during World War II.
  • 1946: at the University of Pennsylvania, ENIAC (Electronic Numerical Integrator And Computer) is put into operation; it worked with vacuum tubes and was the first general-purpose electronic computer.
  • 1947: at Bell Labs, John Bardeen, Walter Houser Brattain, and William Shockley invent the transistor.
  • 1951: EDVAC, whose design was described in John von Neumann's famous 1945 report, begins to operate. Unlike ENIAC, it was binary rather than decimal, and it was among the first computers designed to store its program in memory.
  • 1953: IBM manufactures its first mass-produced computer, the IBM 650. The use of assembly language for computer programming spreads. Transistor-based computers begin to replace vacuum-tube machines, ushering in the second generation of computers.
  • 1958: Jack S. Kilby builds the first integrated circuit.
  • 1964: the appearance of the IBM 360 marks the beginning of the third generation of computers, in which printed circuit boards carrying multiple discrete components are replaced by boards carrying integrated circuits.
  • 1965: Olivetti launches the Programma 101, widely considered the first desktop computer.
  • 1971: Nicolet Instrument Corporation launches the Nicolet 1080, a scientific computer based on 20-bit registers.
  • 1971: Intel introduces the first commercial microprocessor on a single chip, the Intel 4004.
  • 1976: Steve Jobs, Steve Wozniak, and Ronald Wayne found Apple.
  • 1977: Apple introduces the first personal computer to be sold on a large scale, the Apple II, developed by Steve Jobs and Steve Wozniak.
  • 1981: the IBM PC is launched on the market; it would become a commercial success, revolutionize the field of personal computing, and define new standards.
  • 1982: Microsoft presents its MS-DOS operating system, commissioned by IBM.
  • 1983: ARPANET is separated from the military network that originated it, passing to civilian use and thus becoming the origin of the Internet.
  • 1983: Richard Stallman publicly announces the GNU project.
  • 1985: Microsoft launches the Windows 1.0 operating system.
  • 1990: Tim Berners-Lee uses hypertext to create the World Wide Web (WWW), a new way of interacting with the Internet.
  • 1991: Linus Torvalds begins developing Linux, a Unix-compatible operating system.
  • 2000: pocket computers, the PDAs, become widespread at the beginning of the 21st century.
  • 2007: Apple presents the first iPhone, a smartphone.

Components of the computer:

All modern computers are based on the basic model of the von Neumann architecture, although the technologies used in digital computers have evolved greatly since the first models appeared in the 1940s. The architecture was published by John von Neumann in 1945, but key ideas behind it are also attributed to John Presper Eckert and John William Mauchly.

The Von Neumann architecture describes a computer with four main sections: the arithmetic logic unit, the control unit, the primary, main, or central memory, and the input and output (I/O) devices. These parts are interconnected by conductor channels called buses.

Central processing unit:

The Central Processing Unit (CPU) basically consists of the following three elements:

The Arithmetic-Logic Unit (ALU) is the device designed and built to carry out elementary operations: arithmetic operations (addition, subtraction), logical operations (AND, OR, NOT), and comparisons (relational operations). This unit is where all the computational work is done.

The Control Unit (CU) tracks the address of the memory location that holds the instruction the computer is to carry out at that moment. It fetches the instruction's data into the ALU so the operation can be performed, and then conveys the results to the appropriate locations in memory. Once this is done, the control unit moves on to the next instruction (usually located in the next position, unless the instruction is a jump, which tells the computer that the next instruction is located at another memory address).

Registers: small, fast storage locations inside the CPU, including data registers, memory (address) registers, constant registers, floating-point registers, general-purpose registers, and special-purpose registers.

In addition to the units mentioned above, processors may contain other units, such as a floating-point unit.
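
To make the interplay of these elements concrete, here is a hedged Python sketch of a toy fetch-decode-execute loop. The instruction set (LOAD, ADD, SUB, JMP, JMPZ, HALT), the register names, and the sample program are all invented for illustration; real CPUs execute binary-encoded instructions, but the division of labor between the control unit, the ALU, and the registers is the same.

```python
def alu(op: str, a: int, b: int) -> int:
    """Arithmetic-logic unit: elementary arithmetic on two operands."""
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    raise ValueError(f"unknown ALU operation: {op}")

# A tiny program held in memory: count r0 down from 9 in steps of 3.
program = [
    ("LOAD", "r0", 9),     # 0: r0 = 9
    ("LOAD", "r1", 3),     # 1: r1 = 3
    ("SUB",  "r0", "r1"),  # 2: r0 = r0 - r1
    ("JMPZ", "r0", 5),     # 3: if r0 == 0, jump to instruction 5
    ("JMP",  2),           # 4: otherwise loop back to the SUB
    ("HALT",),             # 5: stop
]

registers = {"r0": 0, "r1": 0}
pc = 0  # program counter: address of the next instruction

# Control unit: fetch the instruction at pc, execute it, then advance
# pc -- or jump, when the instruction says so.
while True:
    instr = program[pc]
    op = instr[0]
    if op == "HALT":
        break
    elif op == "LOAD":
        registers[instr[1]] = instr[2]
        pc += 1
    elif op in ("ADD", "SUB"):
        registers[instr[1]] = alu(op, registers[instr[1]], registers[instr[2]])
        pc += 1
    elif op == "JMPZ":
        pc = instr[2] if registers[instr[1]] == 0 else pc + 1
    elif op == "JMP":
        pc = instr[1]

print(registers)  # {'r0': 0, 'r1': 3} after three passes through the loop
```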

Primary memory:

Main memory, known as Random-Access Memory (RAM), is a set of storage cells organized so that they can be accessed directly by a numeric memory address. Each elementary cell corresponds to a bit, the minimum unit of information, and memory is accessed in sequences of 8 bits, which make up one byte. An instruction is a specific operational action, a command that tells the ALU which operation to carry out (addition, subtraction, logical operations, etc.). The bytes of main memory store both the data and the command codes needed to carry out the instructions. Memory capacity is given by the number of cells it contains, measured in bytes (and their multiples).

The technologies used to manufacture memories have changed enormously: from the electromechanical relays of the first computers, through tubes of mercury in which acoustic pulses were formed, arrays of permanent magnets, and individual transistors, to today's integrated circuits with millions of cells on a single chip. Modern memories are subdivided into static memories (SRAM), with six integrated transistors per bit, and the much more widely used dynamic memories (DRAM), with one transistor and one capacitor per bit. RAM can be rewritten many millions of times, unlike ROM, which can only be recorded once.
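
The idea of directly addressable cells can be sketched in a few lines of Python; the helper names here are invented, and real RAM adds rows, columns, and (for DRAM) refresh cycles, all omitted in this sketch.

```python
MEMORY_SIZE = 1024               # capacity in bytes
memory = bytearray(MEMORY_SIZE)  # a flat array of 8-bit cells

def write_byte(address: int, value: int) -> None:
    """Store one byte (8 bits) at the given memory address."""
    memory[address] = value & 0xFF

def read_byte(address: int) -> int:
    """Fetch the byte stored at the given memory address."""
    return memory[address]

# Data and instruction codes share this same memory.
write_byte(0x10, 200)
print(read_byte(0x10))  # -> 200
print(read_byte(0x11))  # -> 0 (an untouched cell)
```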

Input, output, or input/output peripherals:


The input devices allow the entry of data and information, while the output devices externalize the information processed by the computer. Some peripherals handle both input and output. For example, a typical input device is the keyboard, an output device is the monitor, and an input/output device is the hard disk. There is a very wide range of I/O devices: keyboard, monitor, printer, mouse, floppy drive, webcam, etc.

Buses:

The three basic units in a computer (the CPU, the memory, and the I/O subsystem) communicate with each other through communication channels called buses:

The address bus, which selects the address of the data or peripheral to be accessed.

The control bus, which mainly selects the operation to be carried out on the data (chiefly reading, writing, or modification).

The data bus, over which the data itself circulates.
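
A single bus transaction can be sketched as follows in Python. The Bus class and signal names are invented for illustration, but the division of roles is the one just described: the address bus says where, the control bus says what, and the data bus carries the value.

```python
class Bus:
    """A toy model of the three buses linking the CPU to memory."""

    def __init__(self) -> None:
        self.memory = {}  # the device on the other end of the bus

    def transaction(self, address: int, control: str, data: int | None = None):
        if control == "WRITE":                  # control bus: operation
            self.memory[address] = data         # data bus: value in
            return None
        if control == "READ":
            return self.memory.get(address, 0)  # data bus: value out
        raise ValueError(f"unknown control signal: {control}")

bus = Bus()
bus.transaction(0x2000, "WRITE", 42)    # address bus selects 0x2000
print(bus.transaction(0x2000, "READ"))  # -> 42
```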

Other data and concepts:

In modern computers, a user has the impression that computers can run several programs “at the same time”; this is known as multitasking. In reality, the CPU executes some instructions from one program, then after a short time switches to a second program and executes some of its instructions. Because this switching is very fast, it creates the illusion that several programs are running simultaneously; the CPU's time is actually being divided among the programs, one at a time, and the operating system controls how that time is distributed. Truly simultaneous processing occurs on computers that have more than one CPU, giving rise to multiprocessing.
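
Here is a minimal sketch of this time slicing, using Python generators as stand-in "programs" and a round-robin loop as the scheduler. All the names are invented for the example; a real operating system does the switching with hardware timers and saves the full state of each process.

```python
def program(name: str, steps: int):
    """A stand-in 'program': each yield is one slice of its work."""
    for i in range(steps):
        yield f"{name}: step {i}"

tasks = [program("editor", 3), program("browser", 3)]

# Round-robin scheduler: give each task one time slice in turn.
while tasks:
    task = tasks.pop(0)
    try:
        print(next(task))   # run one slice of this program
        tasks.append(task)  # then send it to the back of the queue
    except StopIteration:
        pass                # the program finished; drop it
```

The output interleaves the two programs' steps, which is exactly the illusion of simultaneity described above.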

The operating system is the program that manages and administers all the computer's resources: it controls, for example, which programs are executed and when, manages memory and access to I/O devices, and provides interfaces between devices, and between the computer and the user.

Currently, some widely used programs tend to be included in operating system distributions, such as Internet browsers, word processors, e-mail programs, network interfaces, movie players, and other programs that previously had to be obtained and installed separately.

The first large and expensive digital computers were used primarily for scientific calculations. ENIAC was created for the purpose of solving the ballistics problems of the United States Army. The CSIRAC, the first Australian computer, made it possible to assess rainfall patterns for a large hydroelectric generation project.

With the commercial manufacture of computers, governments and companies systematized many of their data collection and processing tasks, which were previously performed manually. In academia, scientists from all fields began to use computers for their analyses and calculations. The continuous decrease in computer prices allowed their use by smaller and smaller companies. Businesses, organizations, and governments began to use large numbers of small computers to perform tasks that were previously done by large, expensive mainframe computers.

With the invention of the microprocessor in 1971, it became possible to make ever-cheaper computers. First the microcomputer was born, and then the PC appeared; the latter became popular for routine tasks such as writing and printing documents, calculating probabilities, performing analysis and calculation with spreadsheets, and communicating via email and the Internet. The wide availability of computers and their easy adaptation to each person's needs have led to their use in a great variety of tasks spanning the most diverse fields of application.

At the same time, small fixed-program computers (embedded systems) began to make their way into applications for the home, automobiles, airplanes, and industrial machinery. These embedded processors controlled the behavior of devices more easily, allowing the development of more complex control functions, such as anti-lock braking systems (ABS). By the beginning of the 21st century, most electrical appliances, almost all types of electric transportation, and most factory production lines were controlled by a computer.

Toward the end of the 20th century and the beginning of the 21st, personal computers came to be used both for research and for entertainment (video games), while large computers are reserved for complex mathematical calculations, technology, modeling, astronomy, medicine, and so on.

Perhaps the most interesting “descendant” of the cross between the personal computer and the supercomputer is the workstation. This term, originally used for recording and digital sound-processing equipment, now refers to high-capacity computing systems, normally dedicated to scientific calculation tasks or real-time processes. A workstation is, in essence, a personal work computer with computation, performance, and storage capacities superior to those of conventional personal computers.
