Computer parameters affecting operating speed. Intel microprocessor architecture and the main factors affecting its performance.

In modern conditions, profit growth is the main goal of enterprise development. It can be achieved in various ways, one of which is more efficient use of the company's personnel.

The indicator for measuring the performance of a company's workforce is productivity.

General overview

Labor productivity, expressed through a calculation formula, is a criterion that characterizes how efficiently labor is used.

Labor productivity refers to the efficiency that labor has in the production process. It can be measured by a certain period of time required to produce a unit of output.

According to the definition in the encyclopedic dictionary of F. A. Brockhaus and I. A. Efron, the productivity, or efficiency, of labor is the relationship between the volume of labor expended and the result obtained from that labor.

According to L. E. Basovsky, labor productivity can be defined as the productivity of the personnel at the enterprise's disposal. It is determined by the quantity of products produced per unit of working time, or by the labor costs attributable to a unit of output.

Productivity is the amount of output produced by one employee in a specified period of time.

It is a criterion that characterizes the efficiency of living labor and of production work: the output created per unit of working time spent on its production.

Operational efficiency increases based on technological progress, through the introduction of new technologies, increasing the qualifications of employees and their financial interest.

Analysis stages

Labor productivity assessment consists of the following main stages:

  • analysis of absolute indicators over several years;
  • determining the impact of certain factor indicators on productivity dynamics;
  • determination of reserves for productivity gains.

Basic indicators

The main performance indicators analyzed at modern enterprises operating in market conditions, given the need for full employment of personnel, are output and labor intensity.

Output is the amount of production per unit of labor input. It is determined as the ratio of the quantity of products produced (or services provided) to the working time spent on producing them.

Labor intensity is the ratio between working time costs and production volume, which characterizes labor costs per unit of product or service.

Calculation methods

To measure work productivity, three methods of calculating productivity are used:

  • The natural method is used in organizations that produce homogeneous products. It calculates labor productivity as the ratio of the volume of output in physical terms to the average number of employees;
  • the labor method is used when work areas produce a large volume of output with a frequently changing assortment; output is measured in standard hours (the amount of work multiplied by the standard time), and the results are summed across the different types of product;
  • the cost method is used in organizations that produce heterogeneous products. It calculates labor productivity as the ratio of the volume of output in value terms to the average number of employees (a small calculation sketch comparing the natural and cost methods follows this list).
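As a rough illustration of the natural and cost methods, here is a minimal Python sketch; the output volume, revenue and headcount below are purely hypothetical figures invented for the example.

```python
# Hypothetical example: labor productivity by the natural and cost methods.
units_produced = 12_000        # homogeneous output in physical terms, pieces
revenue = 36_000_000           # the same output in value terms, rubles
average_headcount = 50         # average number of employees

productivity_natural = units_produced / average_headcount   # pieces per employee
productivity_cost = revenue / average_headcount              # rubles per employee

print(f"Natural method: {productivity_natural:.0f} units per employee")
print(f"Cost method: {productivity_cost:,.0f} rubles per employee")
```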

In order to assess the level of labor productivity, private, auxiliary and general indicators are used.

Private indicators are the time costs required to produce a unit of product in physical terms per person-day or person-hour. Auxiliary indicators take into account the time spent on performing a unit of a certain type of work, or the amount of such work performed per unit of time.

Calculation method

Among the possible labor productivity indicators, output can be distinguished, which may be measured as the average annual, average daily or average hourly output per employee. There is a direct relationship between these indicators: the average hourly output, together with the number of working days and the length of the working day, determines the employee's average annual output.

The formula for calculating a worker's average annual output is as follows:

VG = KR * PRD * VCH

where VG is the worker's average annual output, thousand rubles;

KR is the number of working days worked per year;

PRD is the duration of the work shift (day), hours;

VCH is the average hourly output, thousand rubles per person.

The level of impact of these factors can be determined using the method of chain substitution, the method of absolute differences, the method of relative differences, or the integral method.

Having information about the impact of individual factors on the indicator under study, it is possible to establish their impact on production volume. To do this, the value describing the impact of a given factor is multiplied by the company's average number of employees.
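As a hedged illustration of the chain substitution method applied to the output formula above, here is a short Python sketch; all plan and actual figures in it are hypothetical.

```python
# Chain substitution for VG = KR * PRD * VCH (all figures hypothetical).
base = {"KR": 220, "PRD": 8.0, "VCH": 0.50}    # plan: days, hours, thousand rubles/hour
fact = {"KR": 215, "PRD": 7.8, "VCH": 0.55}    # actual values

def vg(kr, prd, vch):
    return kr * prd * vch

# Substitute the actual values one factor at a time, in the order KR -> PRD -> VCH.
steps = [
    vg(base["KR"], base["PRD"], base["VCH"]),
    vg(fact["KR"], base["PRD"], base["VCH"]),
    vg(fact["KR"], fact["PRD"], base["VCH"]),
    vg(fact["KR"], fact["PRD"], fact["VCH"]),
]

for name, before, after in zip(["KR", "PRD", "VCH"], steps, steps[1:]):
    print(f"Impact of {name}: {after - before:+.2f} thousand rubles")

print(f"Total change: {steps[-1] - steps[0]:+.2f} thousand rubles")  # sum of the impacts
```

The sum of the three partial impacts equals the total change in VG, which is the internal check of the method.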

Main Factors

Further research into labor productivity focuses on detailing the impact of different factors on worker output (average annual output). Factors are divided into two categories: extensive and intensive. Extensive factors are those that affect the use of working time; intensive factors are those that affect hourly output.

The analysis of extensive factors focuses on identifying losses of working time due to its non-productive use. These losses are determined by comparing the planned and actual working time funds. Their impact on output is calculated by multiplying the number of lost days or hours by the planned average daily (or average hourly) output per worker.

The analysis of intensive factors focuses on conditions associated with changes in the labor intensity of a product. Reducing labor intensity is the main condition for increasing productivity; the inverse relationship also holds.

Factor analysis

Let's consider the basic formulas for the productivity of production factors.

To consider influencing factors, we use methods and principles of calculations generally recognized in economic science.

The labor productivity formula is as follows:

W = Q / T

where W is labor productivity, thousand rubles per person;

Q is the volume of products produced, in value terms, thousand rubles;

T is the number of personnel, people.

Let us express Q from this productivity formula:

Q = W * T

Thus, the volume of production changes depending on changes in labor productivity and the number of personnel.

The dynamics of changes in production volume under the influence of changes in productivity indicators can be calculated using the formula:

ΔQ (W) = (W1-W0)*T1

The dynamics of changes in the quantity of products under the influence of changes in the number of employees will be calculated using the formula:

ΔQ (T) = (T1-T0)*W0

General effect of factors:

ΔQ (W) + Δ Q (T) = ΔQ (total)

Change due to the influence of factors can be calculated using the factor model of the productivity formula:

PT = UD * D * Tsm * CV

where PT is labor productivity, thousand rubles per person;

UD is the share of workers in the total number of personnel;

D is the number of days worked by one worker per year;

Tsm is the average length of the working day, hours;

CV is the average hourly output of a worker, thousand rubles per person.
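To make the factor model more concrete, below is a small Python sketch of the absolute differences method applied to PT = UD * D * Tsm * CV; the base (plan) and actual values are invented for the example.

```python
# Absolute differences method for PT = UD * D * Tsm * CV (hypothetical figures).
base = {"UD": 0.80, "D": 220, "Tsm": 7.9, "CV": 0.45}   # plan
fact = {"UD": 0.82, "D": 215, "Tsm": 7.8, "CV": 0.50}   # actual

# Each factor's impact: its change, with earlier factors taken at the actual level
# and later factors at the base level.
d_ud  = (fact["UD"] - base["UD"]) * base["D"] * base["Tsm"] * base["CV"]
d_d   = fact["UD"] * (fact["D"] - base["D"]) * base["Tsm"] * base["CV"]
d_tsm = fact["UD"] * fact["D"] * (fact["Tsm"] - base["Tsm"]) * base["CV"]
d_cv  = fact["UD"] * fact["D"] * fact["Tsm"] * (fact["CV"] - base["CV"])

pt_base = base["UD"] * base["D"] * base["Tsm"] * base["CV"]
pt_fact = fact["UD"] * fact["D"] * fact["Tsm"] * fact["CV"]

print(round(d_ud, 2), round(d_d, 2), round(d_tsm, 2), round(d_cv, 2))
print(round(d_ud + d_d + d_tsm + d_cv, 2))   # equals the total change below
print(round(pt_fact - pt_base, 2))
```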

Basic reserves

Productivity research is carried out in order to establish reserves for its growth. Reserves for increase may include the following factors affecting labor productivity:

  • raising the technological level of production, i.e. introducing new scientific and technical processes, using higher-quality materials, and mechanizing and automating production;
  • improving the company's structure and selecting the most competent employees, eliminating staff turnover, and raising employees' qualifications;
  • structural changes in production, including the replacement of individual types of product, an increase in the share of new products, and changes in the labor intensity of the production program;
  • the formation and improvement of the necessary public infrastructure, i.e. solving the difficulties associated with meeting the needs of the company and of its workforce.

Directions for improvement

The question of how to increase labor productivity is very relevant for many enterprises.

The essence of labor productivity growth at an enterprise is manifested in:

  • change in the quantity of production when using a unit of labor;
  • change in labor costs per established unit of production;
  • change in wage costs per 1 ruble of output;
  • reducing the share of labor costs in production costs;
  • improving the quality of goods and services;
  • reduction of production defects;
  • increasing the number of products;
  • increase in sales volume and profit.

In order to ensure high productivity of company employees, management needs to ensure normal working conditions. The level of human productivity, as well as the efficiency of his work, can be influenced by a huge number of factors, both intensive and extensive. Taking into account these factors affecting labor productivity is necessary when calculating the productivity indicator and reserves for its growth.

Data storage systems for the vast majority of web projects (and not only) play a key role. Indeed, often the task comes down not only to storing a certain type of content, but also to ensuring its return to visitors, as well as processing, which imposes certain performance requirements.

While the drive industry uses many other metrics to describe and guarantee proper performance, in the storage and disk drive market, it is common to use IOPS as a comparative metric for the purpose of “convenience” of comparison. However, the performance of storage systems, measured in IOPS (Input Output Operations per Second), input/output (write/read) operations, is influenced by a large number of factors.

In this article, I'd like to look at these factors to make the measure of performance expressed in IOPS more understandable.

Let's start with the fact that IOPS is not always the same IOPS, since there are many variables that determine how many IOPS we will get in one case or another. You should also consider that storage systems perform both read and write operations and provide different numbers of IOPS for these operations depending on the architecture and the type of application, especially when I/O operations occur at the same time. Different workloads have different input/output (I/O) requirements. Thus, storage systems that at first glance should provide adequate performance may in fact fail to cope with the task.

Drive Performance Basics

In order to gain a full understanding of the issue, let's start with the basics. IOPS, throughput (MB/s or MiB/s) and response time in milliseconds (ms) are common units of measurement for the performance of drives and storage arrays.

IOPS is usually thought of as a measurement of a storage device's ability to read/write 4-8 KB blocks in random order, which is typical of online transaction processing tasks, databases, and running various applications.

The concept of drive throughput usually applies to reading or writing a large file sequentially, for example in blocks of 64 KB or more (1 stream, 1 file).

Response time is the time it takes for the drive to begin a write/read operation.

The conversion between IOPS and throughput can be done as follows:

IOPS = throughput/block size;
Throughput = IOPS * block size,

Where block size is the amount of information transferred during one input/output (I/O) operation. Thus, knowing such a characteristic of a hard drive (HDD SATA) as bandwidth, we can easily calculate the number of IOPS.

For example, let's take the standard block size - 4KB and the standard throughput declared by the manufacturer for sequential writing or reading (I/O) - 121 MB / s. IOPS = 121 MB / 4 KB, as a result we get a value of about 30,000 IOPS for our SATA hard drive. If the block size is increased and made equal to 8 KB, the value will be about 15,000 IOPS, that is, it will decrease almost proportionally to the increase in the block size. However, it must be clearly understood that here we considered IOPS in the sequential write or read key.
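The conversion above is easy to wrap into a couple of helper functions. The sketch below is a minimal Python illustration of it, using the same 121 MB/s figure as an assumed sequential throughput.

```python
# IOPS <-> throughput conversion (sequential access); 121 MB/s is an assumed figure.
def iops_from_throughput(throughput_mb_s: float, block_kb: float) -> float:
    """IOPS = throughput / block size."""
    return throughput_mb_s * 1024 / block_kb

def throughput_from_iops(iops: float, block_kb: float) -> float:
    """Throughput in MB/s = IOPS * block size."""
    return iops * block_kb / 1024

print(round(iops_from_throughput(121, 4)))      # ~31000 IOPS with 4 KB blocks
print(round(iops_from_throughput(121, 8)))      # ~15500 IOPS with 8 KB blocks
print(round(throughput_from_iops(75, 4), 2))    # ~0.29 MB/s from 75 random IOPS at 4 KB
```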

Things change dramatically for traditional SATA hard drives when reads and writes are random. This is where latency begins to play a role, which is critical for SATA/SAS HDDs (Hard Disk Drives) and sometimes even for SSDs (Solid State Drives). Although the latter often provide performance orders of magnitude better than "rotating" drives due to the absence of moving parts, significant write delays can still occur due to the peculiarities of the technology and, as a result, when they are used in arrays. amarao conducted a rather useful study on the use of solid-state drives in arrays; as it turned out, performance depends on the latency of the slowest drive. You can read more about the results in his article: SSD + raid0 - not everything is so simple.

But let's return to the performance of individual drives. Let's consider the case with “rotating” drives. The time required to perform one random I/O operation will be determined by the following components:

T(I/O) = T(A)+T(L)+T(R/W),

Where T(A) is the access time, or seek time, that is, the time required for the read head to be positioned on the track containing the block of information we need. The manufacturer often specifies 3 parameters in the disk specification:

- the time required to move from the farthest track to the nearest one;
- the time required to move between adjacent tracks;
- the average access time.

Thus we come to the magical conclusion that T(A) can be improved if we place our data on tracks that are as close to each other as possible and as far from the center of the platter as possible (less movement of the head assembly is required, and the outer tracks hold more data, since they are longer and pass under the head faster than the inner ones). Now it becomes clear why defragmentation can be so useful, especially if the data is placed on the outer tracks first.

T(L) is the delay caused by the rotation of the disk, that is, the time required to read or write a specific sector on our track. It is easy to understand that it will lie in the range from 0 to 1/RPS, where RPS is the number of revolutions per second. For example, with a disk characteristic of 7200 RPM (revolutions per minute), we get 7200/60 = 120 revolutions per second. That is, one revolution occurs in (1/120) * 1000 (the number of milliseconds in a second) = 8.33 ms. The average delay in this case will be equal to half the time spent on one revolution - 8.33/2 = 4.16 ms.

T(R/W) is the time to read or write a sector, which is determined by the size of the block selected during formatting (from 512 bytes to ... several megabytes; for more capacious drives, from 4 kilobytes, the standard cluster size) and by the throughput specified in the drive specifications.

The average rotational delay is approximately equal to the time spent on half a revolution and, knowing the rotation speed (7200, 10,000 or 15,000 RPM), is easy to determine, as we have already shown above.

The remaining parameters (the average read and write seek times) are more difficult to determine; they are measured in tests and are specified by the manufacturer.

To calculate the number of random IOPS of a hard drive, provided that reads and writes occur in equal proportion (50%/50%), we can apply the following formula:

IOPS = 1 / ( ((average read seek time + average write seek time) / 2) / 1000 + average rotational delay / 1000 ).

Many people are interested in why exactly this is the origin of the formula? IOPS is the number of input or output operations per second. That is why we divide 1 second in the numerator (1000 milliseconds) by the time, taking into account all the delays in the denominator (also expressed in seconds or milliseconds), required to complete one input or output operation.

That is, the formula can be written this way:

IOPS = 1000 (ms) / ( (average read seek time (ms) + average write seek time (ms)) / 2 + average rotational delay (ms) )

For drives with different numbers of RPM (rotations per minute), we get the following values:

For a 7200 RPM SATA drive: IOPS = 1 / ((8.5 + 9.5)/2/1000 + 4.16/1000) = 1000/13.16 ≈ 75.99;
For a 10K RPM SAS drive: IOPS = 1 / ((3.8 + 4.4)/2/1000 + 2.98/1000) = 1000/7.08 ≈ 141.24;
For a 15K RPM SAS drive: IOPS = 1 / ((3.48 + 3.9)/2/1000 + 2.00/1000) = 1000/5.69 ≈ 175.75.
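The same calculation can be written as a small Python function; the seek times below are the manufacturer figures quoted above, and the rotational delay is derived from the spindle speed.

```python
# Random IOPS estimate from average seek time and spindle speed.
def random_iops(avg_read_seek_ms: float, avg_write_seek_ms: float, rpm: int) -> float:
    avg_seek_ms = (avg_read_seek_ms + avg_write_seek_ms) / 2
    avg_rotational_ms = 0.5 * 60_000 / rpm          # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + avg_rotational_ms)

print(round(random_iops(8.5, 9.5, 7200), 1))    # ~76 IOPS, 7200 RPM SATA
print(round(random_iops(3.8, 4.4, 10_000), 1))  # ~141 IOPS, 10K RPM SAS
print(round(random_iops(3.48, 3.9, 15_000), 1)) # ~176 IOPS, 15K RPM SAS
```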

Thus, we see a dramatic change: from tens of thousands of IOPS for sequential reading or writing, performance drops to several tens of IOPS for random access.

With a standard block size of 4 KB and such a small number of IOPS, the throughput will no longer be a hundred-odd megabytes per second, but less than a megabyte per second.

These examples also illustrate why there is little variation in rated disk IOPS from different manufacturers for drives with the same RPM.

Now it becomes clear why the performance data lies in fairly wide ranges:

7200 RPM (Rotations per Minute) HDD SATA - 50-75 IOPS;
10K RPM HDD SAS - 110-140 IOPS;
15K RPM HDD SAS - 150-200 IOPS;
SSD (Solid State Drive) - tens of thousands of IOPS for reading, hundreds and thousands for writing.

However, the nominal disk IOPS is still far from accurate, since it does not take into account differences in the nature of the loads in individual cases, which is very important to understand.

Also, for a better understanding of the topic, I recommend reading another useful article from amarao: How to correctly measure disk performance, thanks to which it also becomes clear that latency is not at all fixed and also depends on the load and its nature.

The only thing I would like to add:

When calculating hard disk performance, we can neglect the reduction in the number of IOPS as the block size increases. Why?

We have already understood that for “rotating” drives, the time required for a random read or write consists of the following components:

T(I/O) = T(A)+T(L)+T(R/W).

And then we even calculated the performance for random reading and writing in IOPS. We essentially neglected the T(R/W) term there, and this is not accidental. We know that sequential reads can reach, say, 120 megabytes per second. It becomes clear that a 4 KB block will be read in approximately 0.03 ms, a time two orders of magnitude shorter than the other delays (8 ms + 4 ms).

Thus, if with a block size of 4 KB we have about 76 IOPS (the main delay comes from the platter rotation and the head positioning time, not from the reading or writing itself), then with a block size of 64 KB the drop in IOPS will not be 16-fold, as with sequential reading, but only a few IOPS, since the time spent directly on reading or writing increases by about 0.5 ms, which is only about 4% of the total latency.

As a result we get 76 minus 4%, i.e. about 73 IOPS, which, you will agree, is not at all critical for the calculations, since the drop in IOPS is not 16-fold but only a few percent! When calculating system performance, it is much more important not to forget to take other important parameters into account.
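A short Python sketch of this estimate, using the same assumed parameters (9 ms average seek, 4.16 ms rotational delay, 121 MB/s sequential speed), shows how weakly HDD random IOPS depend on the block size.

```python
# Random IOPS of a 7200 RPM SATA drive as a function of block size
# (assumed parameters: 9 ms average seek, 4.16 ms rotational delay, 121 MB/s).
def hdd_random_iops(block_kb: float,
                    avg_seek_ms: float = 9.0,
                    avg_rotational_ms: float = 4.16,
                    sequential_mb_s: float = 121.0) -> float:
    transfer_ms = block_kb / (sequential_mb_s * 1024) * 1000    # T(R/W)
    return 1000 / (avg_seek_ms + avg_rotational_ms + transfer_ms)

for block in (4, 64, 128):
    print(f"{block} KB: {hdd_random_iops(block):.1f} IOPS")
# 4 KB: ~75.8, 64 KB: ~73.1 (about 4% less), 128 KB: ~70.5 (about 7% less)
```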

Magic conclusion: when calculating the performance of storage systems based on hard drives, you should choose the block (cluster) size that provides the maximum throughput you need for the type of data and applications used; the drop in IOPS as the block size grows from 4 KB to 64 KB or even 128 KB can be neglected, or taken into account as roughly 4% and 7%, respectively, if it matters for the task at hand.

It also becomes clear why it does not always make sense to use very large blocks. For example, when streaming video, a two-megabyte block size may not be the best option, since the drop in the number of IOPS will be more than twofold. Among other things, other degradation processes in arrays will be added, associated with multithreading and the computational load of distributing data across the array.

Optimal block (cluster) size

The optimal block size needs to be considered depending on the nature of the load and the type of applications used. If you are working with small data, for example with databases, you should choose the standard 4 KB, but if you are talking about streaming video files, it is better to choose a cluster size of 64 KB or more.

It should also be remembered that the block size is not as critical for SSDs as it is for standard HDDs: an HDD has to rely on a suitably chosen block size to provide the required throughput from its small number of random IOPS, which decreases only slightly as the block size grows, whereas for an SSD the number of IOPS falls almost proportionally to the block size.

Why 4 KB standard?

For many drives, especially solid-state drives, performance values (for example, for writes) become optimal starting from a 4 KB block, as can be seen from the corresponding graphs, while read speed is also quite high and more or less acceptable starting from 4 KB.

It is for this reason that a 4 KB block size is very often used as the standard: with a smaller block there are large performance losses, while with a larger block, when working with small data, the data is distributed less efficiently, occupies the entire block, and the storage capacity is not used effectively.

RAID level

If your storage system is an array of drives combined into a RAID of a certain level, then system performance will depend to a large extent on which RAID level is used and on what percentage of the total number of operations are writes, because in most cases it is writes that cause performance degradation.

So, with RAID0 each write operation consumes only 1 I/O, because the data is distributed across all drives without duplication. In the case of a mirror (RAID1, RAID10), each write operation already consumes 2 I/Os, since the information must be written to 2 drives.

At higher RAID levels, the losses are even more significant; for example, in RAID5 the penalty factor will be 4, which is due to the way the data is distributed across the disks.

RAID5 is used instead of RAID4 in most cases because it distributes parity (checksums) across all disks. In a RAID4 array, one dedicated drive holds all the parity while the data is spread across the remaining three or more drives. The penalty factor of 4 in a RAID5 array arises because each write requires reading the data, reading the parity, then writing the data and writing the parity.

In a RAID6 array, everything is similar, except that instead of calculating parity once, we do it twice and thus have 3 reads and 3 writes, which gives us a penalty factor of 6.

It would seem that in an array such as RAID-DP everything would be similar, since it is essentially a modified RAID6 array. But that was not the case... The trick is that a separate WAFL (Write Anywhere File Layout) file system is used, where all write operations are sequential and performed on free space. WAFL will basically write new data to a new location on disk and then move pointers to the new data, thus eliminating read operations that need to take place. In addition, a log is written to NVRAM, which tracks write transactions, initiates writes, and can restore them if necessary. They are written to the buffer at the beginning, and then they are “merged” onto the disk, which speeds up the process. Probably experts at NetApp can enlighten us in more detail in the comments on how savings are achieved, I have not yet fully understood this issue, but I remembered that the RAID penalty factor will be only 2, not 6. The “trick” is quite significant.

With large RAID-DP arrays that consist of dozens of drives, there is the concept of reducing the "parity penalty" that occurs when parity writes occur. So, as the RAID-DP array grows, a smaller number of disks allocated for parity is required, which will lead to a reduction in losses associated with parity records. However, in small arrays, or in order to increase conservatism, we can neglect this phenomenon.

Now, knowing the IOPS losses caused by a particular RAID level, we can calculate the performance of the array. However, please note that other factors may also have a negative impact, such as interface bandwidth, RAID controller bandwidth, suboptimal interrupt distribution across processor cores, or exceeding the permissible queue depth.

If these factors are neglected, the formula will be as follows:

Functional IOPS = (Raw IOPS * % of writes / RAID penalty factor) + (Raw IOPS * % of reads), where Raw IOPS = average IOPS per drive * number of drives.

For example, let's calculate the performance of a RAID10 array of 12 HDD SATA drives, if it is known that 10% of write operations and 90% of read operations occur simultaneously. Let's say that the disk provides 75 random IOPS, with a block size of 4KB.

Raw IOPS = 75*12 = 900;
Functional IOPS = (900*0.1/2) + (900*0.9) = 855.

Thus, we see that at low write intensity, which is mainly observed in systems designed for content delivery, the influence of the RAID penalty factor is minimal.
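The same estimate can be generalized into a small Python helper; the penalty factors below simply follow the values discussed above, and the drive count and per-drive IOPS are the ones from the example.

```python
# Functional IOPS of an array given the RAID write penalty.
RAID_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(drive_iops: float, drives: int, write_share: float, raid: str) -> float:
    raw = drive_iops * drives
    return raw * write_share / RAID_PENALTY[raid] + raw * (1 - write_share)

# 12 x SATA HDD at 75 random IOPS each, RAID10, 10% writes / 90% reads
print(functional_iops(75, 12, 0.10, "RAID10"))   # 855.0, as in the example above
# The same array under a write-heavy load (70% writes)
print(functional_iops(75, 12, 0.70, "RAID10"))   # 585.0
```

The second call illustrates the point: as the share of writes grows, the penalty factor starts to matter much more.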

Application dependency

The performance of our solution can very much depend on the applications that will be executed subsequently. So it could be transaction processing - "structured" data that is organized, consistent and predictable. Often in these processes, you can apply the principle of batch processing, distributing these processes in time so that the load is minimal, thereby optimizing IOPS consumption. However, recently more and more media projects have appeared where the data is “unstructured” and requires completely different principles for processing it.

For this reason, calculating the required performance of a solution for a specific project can be a very difficult task. Some storage vendors and experts argue that IOPS do not matter, since customers overwhelmingly use up to 30-40 thousand IOPS, while modern storage systems provide hundreds of thousands and even millions of IOPS; that is, modern storage satisfies the needs of 99% of clients. However, this statement is not always true: it applies mainly to the business segment that hosts storage locally, but not to projects hosted in data centers, which, even when using ready-made storage solutions, often need to provide quite high performance and fault tolerance.

If the project is located in a data center, in most cases it is still more economical to build the storage system yourself on dedicated servers than to use ready-made solutions, since it becomes possible to distribute the load more effectively and to select the optimal equipment for particular processes. Among other things, the performance figures of ready-made storage systems are far from real, since they are mostly based on synthetic performance tests using 4 or 8 KB block sizes, while most client applications now run in environments with block sizes between 32 and 64 KB.

As the statistics show, less than 5% of storage systems are configured with a block size of less than 10 KB, and less than 15% use a block size of less than 20 KB. In addition, even for a given application, it is rare that only one type of I/O consumption occurs. For example, a database will have different I/O profiles for different processes (data files, logging, indexes, and so on). This means that the stated synthetic system performance tests may be far from the truth.

What about delays?

The tools used to measure latency tend to report average latency and miss the fact that a single I/O in some process can take much longer than the others, slowing down the whole process. Moreover, they do not take into account at all how much I/O latency changes depending on block size. On top of that, this time also depends on the specific application.

Thus, we come to another magical conclusion: not only is IOPS, taken on its own, not a very good characteristic for measuring storage system performance, but latency can also turn out to be a completely useless parameter.

Well, if neither IOPS nor latency are a good measure of storage system performance, then what is?

Only a real test of application execution on a specific solution...

This test will be a real method that will certainly allow you to understand how productive the solution will be for your case. To do this, you will need to run a copy of the application on a separate storage and simulate the load for a certain period. This is the only way to obtain reliable data. And of course, you need to measure not storage metrics, but application metrics.

However, taking into account the above factors that affect the performance of our systems can be very useful when selecting storage or building a certain infrastructure based on dedicated servers. With a certain degree of conservatism, it becomes possible to select a more or less realistic solution, to eliminate some technical and software flaws in the form of non-optimal block size when partitioning or non-optimal work with disks. The solution, of course, will not 100% guarantee the calculated performance, but in 99% of cases it can be said that the solution will cope with the load, especially if you add conservatism depending on the type of application and its features into the calculation.

In any production, one of the main goals pursued by the company's management is to obtain results. The only question is how much effort and resources will be required in the process of work to achieve the main goal. To determine the efficiency of an enterprise, the concept of “labor productivity” was introduced, which is an indicator of staff productivity. The work that can be done by one person per unit of time is conventionally called “output”.

For each enterprise it is very important to obtain high results and at the same time spend as little resources as possible on production (this includes electricity bills, rent, etc.).

The most important task in any enterprise that manufactures goods or provides services is to increase productivity. At the same time, there are a number of measures that are usually followed to reduce the amount of costs required for the work process. Thus, during the period of enterprise development, labor productivity may change.

Several groups of factors are usually identified that can influence the change, namely the growth, of production indicators. First of all, there are economic and geographical factors: the availability of labor resources, water, electricity and building materials, the distance to communications, the terrain, and so on. No less important is the acceleration of scientific and technological progress, which promotes the introduction of new generations of modern equipment and the use of advanced technologies and automated systems. Labor productivity also depends on the factor of structural changes, that is, changes in the share of components and purchased semi-finished products, in the structure of production, and in the share of individual types of products.

The social (human) aspect still remains of great importance, because it is the concern for social benefits that underlies the increase in labor productivity. This includes: concern about a person’s physical health, level of intellectual development, professionalism, etc.

Factors that increase labor productivity are the most important component of the entire work process, because they influence the rate of development of any enterprise and, accordingly, contribute to an increase in profits.

It is also worth noting the organizational point that determines the level of production and labor management. This includes improving the organization of enterprise management, improving personnel, material and technical training.

When talking about productivity, it is impossible to ignore labor intensity. This concept is a reflection of the amount of mental and physical energy expended by an employee during a certain period of working time.

It is very important to determine the optimal intensity for a given work process, because excessive activity can lead to inevitable losses in productivity. As a rule, this occurs as a result of human overwork, occupational diseases, injuries, etc.

It is worth noting that the main indicators that determine the intensity of labor have been identified. First of all, this is a person’s workload. This allows you to determine the intensity of the work process and, accordingly, the feasibility of costs. At the same time, it is customary to calculate the pace of work, that is, the frequency of actions relative to a unit of time. Taking into account these factors, the enterprise, as a rule, has certain standards, based on the indicators of which the production work plan is established.

Factors of labor productivity are the subject of close attention of scientists and practitioners, since they act as the root cause that determines its level and dynamics. The factors studied in the analysis can be classified according to different criteria. We present the most detailed classification in Table 1

Table 1

Classification of factors affecting labor productivity

Classification feature and the corresponding groups of factors:

  • By nature: natural-climatic; socio-economic; production-economic
  • By degree of impact on the result: main; secondary
  • In relation to the object of study: internal; external
  • Depending on the team: objective; subjective
  • By prevalence: general; specific
  • By duration: constant; variable
  • By the nature of the action: extensive; intensive
  • By the properties of the reflected phenomena: quantitative; qualitative
  • By composition: simple; complex
  • By level of subordination (hierarchy): first order; second order, etc.
  • By measurability of the impact: measurable; unmeasurable

By their nature, factors are divided into natural-climatic, socio-economic and production-economic.

Natural and climatic factors have a great influence on the results of activities in agriculture, the mining industry, forestry and other industries. Taking into account their influence allows us to more accurately assess the results of the work of business entities. Socio-economic factors include the living conditions of workers, the organization of cultural, sports and recreational work at the enterprise, the general level of culture and education of personnel, etc. They contribute to a more complete use of the enterprise’s production resources and increase the efficiency of its work. Production and economic factors determine the completeness and efficiency of use of the enterprise's production resources and the final results of its activities. Based on the degree of impact on the results of economic activity, factors are divided into major and minor. The main ones include factors that have a decisive impact on the performance indicator. Those that do not have a decisive impact on the results of economic activity in the current conditions are considered secondary. Here it is necessary to note that the same factor, depending on the circumstances, can be both primary and secondary. The ability to identify the main, determining factors from a variety of factors ensures the correctness of the conclusions based on the results of the analysis.

In relation to the object of study, factors are classified into internal and external, i.e. dependent and independent of the activities of this enterprise. The main attention in the analysis should be paid to the study of internal factors that the enterprise can influence.

At the same time, in many cases, with developed production connections and relationships, the results of each enterprise are significantly influenced by the activities of other enterprises, for example, the uniformity and timeliness of supplies of raw materials, materials, their quality, cost, market conditions, inflationary processes, etc. These factors are external. They do not characterize the efforts of a given team, but their study makes it possible to more accurately determine the degree of influence of internal causes and thereby more fully identify the internal reserves of production.

To correctly assess the activities of enterprises, factors must be further divided into objective and subjective. Objective factors, such as a natural disaster, do not depend on the will and desire of people. Unlike objective reasons, subjective reasons depend on the activities of legal entities and individuals.

According to the degree of prevalence, factors are divided into general and specific. General factors include factors that operate in all sectors of the economy. Specific are those that operate in a particular sector of the economy or enterprise. This division of factors allows us to more fully take into account the characteristics of individual enterprises and industries and more accurately assess their activities.

Based on the duration of impact on performance results, factors are distinguished between constant and variable. Constant factors influence the phenomenon under study continuously throughout time. The impact of variable factors manifests itself periodically, for example, the development of new technology, new types of products, new production technology, etc.

Of great importance for assessing the activities of enterprises is the division of factors according to the nature of their action into intensive and extensive. Extensive factors include factors that are associated with a quantitative rather than a qualitative increase in the performance indicator, for example, an increase in the volume of production by expanding the sown area, increasing the number of animals, the number of workers, etc. Intensive factors characterize the degree of effort and labor intensity in the production process, for example, increasing agricultural yields, livestock productivity, and the level of labor productivity.

If the analysis aims to measure the influence of each factor on the results of economic activity, then they are divided into quantitative and qualitative, simple and complex, measurable and unmeasurable.

Factors that express the quantitative certainty of phenomena (the number of workers, the amount of equipment, raw materials, etc.) are considered quantitative. Qualitative factors describe the internal qualities, properties and characteristics of the objects being studied (labor productivity, product quality, soil fertility, etc.).

Most of the factors studied are complex in composition and consist of several elements. However, there are also those that cannot be broken down into their component parts. Depending on their composition, factors are divided into complex (complex) and simple (elemental). An example of a complex factor is labor productivity, and a simple one is the number of working days in the reporting period.

As already indicated, some factors have a direct impact on the performance indicator, while others have an indirect one. Based on the level of subordination (hierarchy), factors of the first, second, third and subsequent levels are distinguished. First-level factors are those that directly affect the performance indicator. Factors that affect the performance indicator indirectly, through first-level factors, are called second-level factors, and so on. For example, relative to gross output, the first-level factors are the average annual number of workers and the average annual output per worker. The number of days worked by one worker and the average daily output are second-level factors. Third-level factors include the length of the working day and the average hourly output.

The basis of running any business is the rational and efficient use of available resources, including labor. It is quite logical that management seeks to increase the volume of output without additional costs for hiring workers. Experts identify several factors that can improve performance:

  • Managerial style (the main task of a manager is to motivate staff and create an organizational culture that values activity and hard work).
  • Investment in technical innovation (purchasing new equipment that meets the demands of the time can significantly reduce the time each employee spends on a task).
  • Training courses and seminars for professional development (knowledge of the specifics of production allows personnel to participate in improving the production process).

Many users wonder what most affects computer performance?

It turns out that it is impossible to give a definite answer to this question. A computer is a set of subsystems (memory, computing, graphics, storage) that interact with each other through the motherboard and device drivers. If subsystems are not configured correctly, they do not provide the maximum performance they could.

Overall performance is made up of hardware and software settings and features.
Let's list them.

Hardware Performance Factors:

  1. Number of processor cores – 1, 2, 3 or 4
  2. Processor frequency and processor system bus (FSB) frequency – 533, 667, 800, 1066, 1333 or 1600 MHz
  3. Volume and quantity of processor cache memory (CPU) – 256, 512 KB; 1, 2, 3, 4, 6, 12 MB.
  4. Matching the system bus frequency of the CPU and motherboard
  5. Random access memory (RAM) frequency and motherboard memory bus frequency – DDR2-667, 800, 1066
  6. RAM capacity – 512 MB or more
  7. Chipset used on the motherboard (Intel, VIA, SIS, nVidia, ATI/AMD)
  8. The graphics subsystem used is built into the motherboard or discrete (external video card with its own video memory and graphics processor)
  9. Hard drive (HDD) interface type – parallel IDE or serial SATA and SATA-2
  10. Hard drive cache – 8, 16 or 32 MB.

Increasing the listed technical characteristics always increases performance.

Cores

At the moment, most manufactured processors have at least 2 cores (except AMD Sempron, Athlon 64 and Intel Celeron D, Celeron 4xx). The number of cores is important in 3D rendering or video encoding tasks, as well as in programs whose code is optimized for multi-threading of several cores. In other cases (for example, in office and Internet tasks) they are useless.

Intel Core 2 Extreme and Core 2 Quad processors with the following markings have four cores: QX9xxx, Q9xxx, Q8xxx, QX6xxx;
AMD Phenom X3 has 3 cores;
AMD Phenom X4 has 4 cores.

We must remember that the number of cores significantly increases the power consumption of the CPU and increases the power requirements for the motherboard and power supply!

But the generation and architecture of the core greatly influence the performance of any processor.
For example, if we take dual-core Intel Pentium D and Core 2 Duo with the same frequency, system bus and cache memory, then Core 2 Duo will undoubtedly win.

Processor, memory and motherboard bus frequencies

It is also very important that the frequencies of the various components match.
Let's say your motherboard supports a memory bus frequency of 800 MHz, but a DDR2-667 memory module is installed; then the lower frequency of the memory module will limit performance.

Conversely, if the motherboard does not support a frequency of 800 MHz but a DDR2-800 module is installed, the module will work, but at a lower frequency.

Caches

The processor cache primarily affects performance when working with CAD systems, large databases and graphics. A cache is memory with a faster access speed, designed to speed up access to data that resides permanently in memory with slower access (hereinafter "main memory"). Caching is used by CPUs, hard drives, browsers and web servers.

When the CPU accesses data, the cache is examined first. If an entry with an identifier matching that of the requested data item is found in the cache, the data item from the cache is used. This case is called a cache hit. If no entry containing the requested data item is found in the cache, it is read from main memory into the cache and becomes available for subsequent accesses. This case is called a cache miss. The percentage of accesses for which the result is found in the cache is called the hit rate or cache hit ratio.
The percentage of cache hits is higher for Intel processors.
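To illustrate the hit/miss logic described above, here is a toy Python sketch; it is a deliberately simplified model with a FIFO eviction policy, not how a real CPU cache is organized.

```python
# Toy model of cache lookups: hits, misses and the hit ratio.
from collections import OrderedDict

def read(address, cache, main_memory, stats, capacity=4):
    if address in cache:                  # cache hit: the data is already cached
        stats["hits"] += 1
        return cache[address]
    stats["misses"] += 1                  # cache miss: fetch from main memory
    value = main_memory[address]
    cache[address] = value
    if len(cache) > capacity:             # evict the oldest entry (simple FIFO policy)
        cache.popitem(last=False)
    return value

main_memory = {addr: addr * 10 for addr in range(100)}
cache, stats = OrderedDict(), {"hits": 0, "misses": 0}

for addr in [1, 2, 1, 3, 1, 2, 4, 5, 1]:
    read(addr, cache, main_memory, stats)

hit_ratio = stats["hits"] / (stats["hits"] + stats["misses"])
print(stats, f"hit ratio = {hit_ratio:.0%}")     # 3 hits, 6 misses, hit ratio = 33%
```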

All CPUs differ in the number of caches (up to 3) and their size. The fastest cache is the first level (L1), the slowest is the third (L3). Only AMD Phenom processors have L3 cache. So it is very important that the L1 cache has a large size.

We tested the dependence of performance on cache memory size. If you compare the results of the 3D shooters Prey and Quake 4, which are typical gaming applications, the performance difference between 1 and 4 MB is approximately the same as between processors with a difference in frequency of 200 MHz. The same applies to video encoding tests for DivX 6.6 and XviD 1.1.2 codecs, as well as the WinRAR 3.7 archiver. However, CPU-intensive applications like 3DStudio Max 8, Lame MP3 Encoder, or MainConcept's H.264 Encoder V2 don't benefit much from larger cache sizes.
Let us remember that the L2 cache has a much greater impact on the performance of the Intel Core 2 CPU than the AMD Athlon 64 X2 or Phenom, since Intel has a common L2 cache for all cores, while AMD has a separate one for each core! In this regard, Phenom works better with cache.

RAM

As already mentioned, RAM is characterized by frequency and volume. At the same time, there are now 2 types of memory available, DDR2 and DDR3, which differ in architecture, performance, frequency and supply voltage - that is, everything!
The frequency of the memory module must match the frequency supported by the motherboard's memory bus.

The amount of RAM also affects the performance of the operating system and resource-intensive applications.
The calculations are simple: Windows XP takes up 300-350 MB of RAM after loading. If there are additional programs in startup, they also consume RAM. That is, out of 512 MB only 150-200 MB remain free, enough only for light office applications.
For comfortable work with AutoCAD, graphics applications, 3DMax, video encoding and graphics, at least 1 GB of RAM is required. If you use Windows Vista, then at least 2 GB.

Graphics subsystem

Office computers often use motherboards that have built-in graphics. Motherboards on such chipsets (G31, G45, AMD 770G, etc.) have the letter G in their markings.
These integrated graphics cards use some of the RAM for video memory, thereby reducing the amount of RAM space available to the user.

Accordingly, to increase performance, the built-in video card must be disabled in the motherboard BIOS, and an external (discrete) video card must be installed in the PCI-Express slot.
All video cards differ in the graphics chipset, the operating frequency of its pipelines, the number of pipelines, the video memory frequency, and the video memory bus width.

Storage subsystem

Drive performance has a big impact when accessing large amounts of data (video, audio), as well as when opening a large number of small files.

Among the technical characteristics that affect the speed of access to files, it should be noted the type of hard drive interface (HDD) - parallel IDE or serial SATA and SATA-2 and hard drive cache - 8, 16 or 32 MB.
At the moment, it is recommended to install hard drives only with the SATA-2 interface, which has the highest bandwidth and the largest cache.

Software performance factors:

  1. Number of installed programs
  2. File system fragmentation
  3. File system errors, bad sectors
  4. OS registry fragmentation
  5. OS registry errors
  6. Page file size (virtual memory size)
  7. Included OS GUI visualization elements
  8. Windows programs and services loading at startup

This is not a complete list, but these are the features of Windows OS that can greatly slow down its operation.
But we will talk about these characteristics, settings and parameters in the next article.

The CPU is the main computing component and it greatly influences the performance of a computer. But how much does gaming performance depend on the processor? Should you upgrade your processor to improve gaming performance? What kind of gain will this give? We will try to answer these questions in this article.

1. What to change: the video card or the processor?

Not long ago, I again encountered a lack of computer performance and it became clear that it was time for another upgrade. At that time my configuration was as follows:

  • Phenom II X4 945 (3 GHz)
  • 8 GB DDR2 800 MHz
  • GTX 660 2 GB

Overall, I was quite happy with the computer’s performance, the system worked quite quickly, most games ran on high or medium/high graphics settings, and I didn’t edit videos that often, so 15-30 minutes of rendering didn’t bother me.

The first problems arose in World of Tanks, when changing the graphics settings from high to medium did not give the expected performance increase. The frame rate periodically dropped from 60 to 40 FPS. It became clear that performance was limited by the processor. So it was decided to overclock it to 3.6 GHz, which solved the problems in WoT.

But time passed, new heavy games were released, and from WoT I switched to one more demanding of system resources (Armata). The situation repeated itself, and the question arose of what to change: the video card or the processor. There was no point in swapping the GTX 660 for a 1060; it would have made sense to go for at least a GTX 1070. But the old Phenom definitely would not have been able to keep up with such a video card, and even when changing the settings in Armata it was clear that performance was again limited by the processor. Therefore, it was decided to replace the processor first and move to a higher-performing Intel platform for games.

Replacing the processor entailed replacing the motherboard and RAM. But there was no other way out; besides, there was hope that a more powerful processor would allow the old video card to perform more fully in processor-dependent games.

2. Processor selection

There were no Ryzen processors at that time; their release was only expected. In order to fully evaluate them, it was necessary to wait for their release and mass testing to identify strengths and weaknesses.

In addition, it was already known that the price at the time of their release would be quite high and it was necessary to wait about another six months until the prices for them became more adequate. There was no desire to wait that long, just as there was no desire to quickly switch to the still crude AM4 platform. And, given AMD’s eternal blunders, it was also risky.

Therefore, Ryzen processors were not considered and preference was given to the already polished and well-proven Intel platform on socket 1151. And, as practice has shown, not in vain, since Ryzen processors turned out to be worse in games, and for other tasks I already had enough performance.

At first the choice was between Core i5 processors:

  • Core i5-6600
  • Core i5-7600
  • Core i5-6600K
  • Core i5-7600K

For a mid-range gaming computer, the i5-6600 was the minimum option. But in the future, I wanted to have some reserve in case of replacing the video card. The Core i5-7600 was not very different, so the original plan was to purchase a Core i5-6600K or Core i5-7600K with the ability to overclock to a stable 4.4 GHz.

But, having read the test results in modern games, where the load on these processors was close to 90%, it was clear that in the future they might not be enough. And I wanted a good platform with a reserve for a long time, since the days when you could upgrade your PC every year are gone.

So I started looking at Core i7 processors:

  • Core i7-6700
  • Core i7-7700
  • Core i7-6700K
  • Core i7-7700K

In modern games they are not yet fully loaded, but somewhere around 60-70%. But, the Core i7-6700 has a base frequency of only 3.4 GHz, and the Core i7-7700 has not much more - 3.6 GHz.

According to test results in modern games with top video cards, the greatest performance increase is observed at around 4 GHz. Then it is no longer so significant, sometimes almost invisible.

Despite the fact that i5 and i7 processors are equipped with auto-overclocking technology (Turbo Boost), you shouldn't count on it too much, since in games where all cores are used the increase will be insignificant (only 100-200 MHz).

Thus, the Core i7-6700K (4 GHz) and i7-7700K (4.2 GHz) processors are more optimal, and given the possibility of overclocking to a stable 4.4 GHz, they are also significantly more promising than the i7-6700 (3.4 GHz) and i7-7700 (3.6 GHz), since the difference in frequency will already be 800-1000 MHz!

At the time of the upgrade, Intel 7th generation processors (Core i7-7xxx) had just appeared and were significantly more expensive than 6th generation processors (Core i7-6xxx), the prices of which had already begun to decline. At the same time, in the new generation they updated only the built-in graphics, which are not needed for games. And their overclocking capabilities are almost the same.

In addition, motherboards with the new chipsets were also more expensive (and although you can install a 7th-generation processor on a board with an older chipset, this may pose some problems).

Therefore, it was decided to take the Core i7-6700K with a base frequency of 4 GHz and the ability to overclock to a stable 4.4 GHz in the future.

3. Selecting a motherboard and memory

I, like most enthusiasts and technical experts, prefer high-quality and stable motherboards from ASUS. For the Core i7-6700K with its overclocking capabilities, the best option is a motherboard based on the Z170 chipset. In addition, I wanted a better built-in sound card. Therefore, it was decided to take the most inexpensive gaming motherboard from ASUS on the Z170 chipset.

As for the memory, given the motherboard's support for module frequencies up to 3400 MHz, I also wanted something faster. For a modern gaming PC, the best option is a 2x8 GB DDR4 kit. All that remained was to find the optimal kit in terms of price/frequency ratio.

Initially, the choice fell on AMD Radeon R7 (2666 MHz), since the price was very tempting. But at the time of ordering, it was not in stock. I had to choose between the much more expensive G.Skill RipjawsV (3000 MHz) and the slightly less expensive Team T-Force Dark (2666 MHz).

It was a difficult choice, since I wanted faster memory, and funds were limited. Based on tests in modern games (which I studied), the performance difference between 2133 MHz and 3000 MHz memory was 3-13% and an average of 6%. It's not much, but I wanted to get the maximum.

But the fact is that fast memory is made by factory overclocking slower chips. G.Skill RipjawsV memory (3000 MHz) is no exception and, to achieve this frequency, its supply voltage is 1.35 V. In addition, processors have a hard time digesting memory with too high a frequency and already at a frequency of 3000 MHz the system may not work stably. Well, increased supply voltage leads to faster wear (degradation) of both memory chips and the processor controller (Intel officially announced this).

At the same time, Team T-Force Dark memory (2666 MHz) operates at a voltage of 1.2 V and, according to the manufacturer, allows the voltage to increase to 1.4 V, which, if desired, will allow you to overclock it manually. After weighing all the pros and cons, the choice was made in favor of memory with a standard voltage of 1.2 V.

4. Gaming performance tests

Before switching platforms, I performed performance tests on the old system in some games. After changing the platform, the same tests were repeated.

The tests were performed on a clean Windows 7 system with the same video card (GTX 660) at high graphics settings, since the goal of replacing the processor was to increase performance without reducing image quality.

To get more accurate results, only games with a built-in benchmark were used. As an exception, the performance test in the online tank shooter Armored Warfare was carried out by recording a replay and then playing it back while measuring the frame rate with Fraps.

High graphics settings.

Test on Phenom X4 (@3.6 GHz).

The test results show that the average FPS changed slightly (from 36 to 38). This means that the performance in this game depends on the video card. However, the minimum FPS drops in all tests have decreased significantly (from 11-12 to 21-26), which means the game will still be a little more comfortable.

In hopes of improving performance with DirectX 12, I later did a test in Windows 10.

But the results were even worse.

Batman: Arkham Knight

High graphics settings.

Test on Phenom X4 (@3.6 GHz).

Test on Core i7-6700K (4.0 GHz).

The game is very demanding on both the video card and the processor. The tests show that replacing the processor led to a significant increase in average FPS (from 14 to 23); the minimum FPS rose from 0 to 15, and the maximum also increased (from 27 to 37). However, these figures do not allow for comfortable gaming, so I decided to run tests at medium settings with various effects disabled.

Medium graphics settings.

Test on Phenom X4 (@3.6 GHz).

Test on Core i7-6700K (4.0 GHz).

At medium settings, the average FPS also increased slightly (from 37 to 44), and the minimum FPS rose significantly (from 22 to 35), exceeding the 30 FPS threshold for comfortable play. The gap in the maximum value also remained (from 50 to 64). As a result of changing the processor, playing became quite comfortable.

Switching to Windows 10 changed absolutely nothing.

Deus Ex: Mankind Divided

High graphics settings.

Test on Phenom X4 (@3.6 GHz).

Test on Core i7-6700K (4.0 GHz).

The only result of replacing the processor was that the minimum FPS rose from 13 to 18. Unfortunately, I forgot to run tests at medium settings, but I did test with DirectX 12.

As a result, the minimum FPS only dropped.

Armored Warfare: Armata Project

I play this game often and it has become one of the main reasons for upgrading my computer. At high settings, the game produced 40-60 FPS with rare but unpleasant drops to 20-30.

Reducing the settings to medium eliminated serious drops, but the average FPS remained almost the same, which is an indirect sign of a lack of processor performance.

A replay was recorded and tests were performed in playback mode using FRAPS at high settings.

I summarized their results in a table.

CPU FPS (min) FPS (avg) FPS (max)
Phenom X4 (@3.6 GHz) 28 51 63
Core i7-6700K (4.0 GHz) 57 69 80

Replacing the processor completely eliminated critical FPS drops and seriously increased the average frame rate. This made it possible to enable vertical synchronization, making the picture smoother and more pleasant. At the same time, the game produces a stable 60 FPS without drops and is very comfortable to play.

Other games

I have not conducted tests, but in general a similar picture is observed in most online and processor-dependent games. The processor seriously affects FPS in online games such as Battlefield 1 and Overwatch. And also in open world games like GTA 5 and Watch Dogs.

For the sake of experiment, I installed GTA 5 on the old PC with the Phenom processor and on the new one with the Core i7. If earlier, at high settings, the FPS stayed within 40-50, now it stably stays above 60 with virtually no drops and often reaches 70-80. These changes are noticeable to the naked eye, let alone with an FPS counter running.

5. Rendering performance test

I don't do much video editing and only ran one simple test. I rendered a Full HD video with a length of 17:22 and a volume of 2.44 GB at a lower bitrate in the Camtasia program that I use. The result was a file of 181 MB. The processors completed the task in the following time.

CPU Time
Phenom X4 (@3.6 GHz) 16:34
Core i7-6700K (4.0 GHz) 3:56

Of course, a video card (GTX 660) was involved in the rendering, because I can’t imagine who would think of rendering without a video card, since it takes 5-10 times longer. In addition, the smoothness and speed of playback of effects during editing also very much depends on the video card.

However, the processor still matters, and the Core i7 handled this task 4 times faster than the Phenom X4. As the complexity of the editing and effects increases, this time can grow significantly: what takes the Phenom X4 2 hours, the Core i7 can handle in 30 minutes.

If you plan to seriously engage in video editing, then a powerful multi-threaded processor and a large amount of memory will significantly save you time.

6. Conclusion

The appetites of modern games and professional applications are growing very quickly, requiring constant investment in upgrading your computer. But if you have a weak processor, there is no point in changing the video card: the processor simply will not let it reach its full potential, i.e. performance will be limited by the processor.

A modern platform based on a powerful processor with sufficient RAM will ensure high performance of your PC for years to come. This reduces the cost of upgrading a computer and eliminates the need to completely replace the PC after a few years.

7. Links

Intel Core i7-8700 processor
Intel Core i5-8400 processor
Intel Core i3-8100 processor