Information reliability control
The credibility of information is a cornerstone of both public safety and governance. Reliable data ensure the correct operation of automated process control systems (APCS) at industrial enterprises and support objective decisions in courts.
Inaccurate information increases the risk of man-made accidents at oil refining, chemical, and fuel industry enterprises. Facilities where explosive or toxic substances can accumulate are equipped with high-reliability systems that monitor the emergence of risk factors. At large critical infrastructure facilities, the number of sensors reaches tens of thousands. Sensitive sensors record the slightest changes in parameters, such as temperature or pressure drops. Distortion of the information transmitted by the sensors leads to incorrect operation of the protection systems and, as a result, to accidents.
Another example is inaccurate accounting records or company prospectuses. Exchange transactions concluded on the basis of inaccurate information often lead to financial losses. Likewise, an unreliable witness statement can lead to a miscarriage of justice.
Practice has produced both formal-logical algorithms and hardware methods for checking the reliability of information, as well as dedicated software products.
Signs of reliability
The integrity and authenticity of data include:
- the legibility of the information message, in whatever form it is received;
- the absence of false or distorted information;
- a minimal chance of misreading units of information, whether lexical, numeric, or electronic;
- agreement of data coming from different sources.
Distortion of information can be complete or partial. Several grades are used to express the degree of accuracy: "fully reliable", "predominantly reliable", "completely unreliable", and "status undetermined". The reliability of the source is an important criterion for assessing the reliability of the information itself. Media "sensations", for example, often originate from a non-authoritative source, are based on an unidentified blog post, or refer to non-existent individuals. The Collins English Dictionary named "fake news" its 2017 Word of the Year, reflecting how widespread information of reduced reliability has become.
Credibility must be distinguished from relevance. Information may be perfectly accurate for a certain period of time yet lose its relevance, making it impossible to base a decision on it.
Sources of information are also ranked by degree of reliability. Data from state bodies rank first in quality. Even official sources, however, can turn out to be false. For example, official US statistics on inflation and unemployment were questioned by billionaire Paul Singer, head of the hedge fund Elliott Management. In 2014, the US GDP indicator for the first quarter shifted by three percentage points: initially reported as 0.1%, it was later revised to -2.9%. Similar risks apply to other public institutions.
A specialist in a given field may be recognized as competent, yet is not always an authentic source of information. References to data from research institutes, statistical studies, and scientific work may be relevant in a number of cases. The reference to "British scientists" has become a classic example of how false information is injected into the public sphere under the guise of an undefined category of sources.
Data obtained from different sources should be compared with one another. This reveals distortions introduced during transmission or interpretation. The more sources supply identical data, the higher the reliability. In an automated control system, for example, this means analyzing the identity of data coming from duplicated sensors. If duplicate values were expected to coincide, any difference between them indicates a failure or malfunction of the sensors. Comparing the readings of identical sources thus becomes one of the main methods for assessing the degree of accuracy of information.
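The cross-check of duplicated sensors described above can be sketched as follows. This is a minimal illustration, not an implementation from any specific ACS: the function name, the use of a median as the consensus value, and the tolerance parameter are all assumptions made for the example.

```python
from statistics import median

def check_redundant_sensors(readings, tolerance):
    """Compare readings from sensors that should report the same value.

    readings: numeric readings from duplicated sensors.
    tolerance: maximum allowed deviation from the consensus value.
    Returns (consensus, suspects), where suspects are the indices of
    sensors whose reading deviates from the consensus beyond tolerance.
    """
    consensus = median(readings)  # median is robust to a single outlier
    suspects = [i for i, r in enumerate(readings)
                if abs(r - consensus) > tolerance]
    return consensus, suspects
```

For instance, `check_redundant_sensors([10.1, 10.0, 15.0], 0.5)` reports the third sensor as suspect, since its reading deviates far from the other two.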
Industries and objects for which information reliability control is relevant
Increased attention to the reliability of information is shown primarily by:
- companies that operate potentially dangerous equipment and technologies, where reliable information about the state of systems and facilities and the presence of dangerous technogenic factors is critical;
- financial sector companies that make decisions based on the quality of reporting and other information about counterparties and customers;
- government services that analyze statistical or accounting reports, where distorted data can lead to incorrect decisions by government bodies;
- courts, including arbitration courts, which make decisions by assessing the reliability of the information provided by the parties to an adversarial process.
The difficulty of information reliability control lies in the fact that hardware methods are available only for data that has a standardized format and can be compared with reference values. This is possible for signals from ACS systems, for strictly defined arrays of accounting or financial data, and for the results of scientific experiments that can be modeled. Where the assessment is subjective, even when made by an expert, there is a risk of misjudging the degree of truth of the data.
Problems that reduce confidence in information
Degradation of information reliability can be intentional or accidental. Accidental factors include system or hardware failures, loss of data during transmission, and misinterpretation. Distortions introduced during transmission and data entry are also distinguished; in some cases their presence can be detected by software. Deliberate distortion of information is more common and, as a rule, reflects an intent to mislead.
The reliability of information is confirmed in various ways. Ideally, data arrays can be checked by software or compared against the original source. Additional methods become necessary when decisions must be made on the basis of the information provided, for example when evidence is assessed in court. Expert examination often serves this purpose. However, the widely reported story of a "drunk" six-year-old boy, in whose blood forensic experts found 2.7 ppm of alcohol, shows that expert data can also be false.
Validity control techniques
Algorithms for checking data reliability, including those used in software, are divided into syntactic, semantic, and pragmatic.
Syntactic methods check the order of required data and details, the correspondence of document field types to the data entered in them, and the presence of prohibited characters. Programs perform this control based on the document descriptions loaded into them.
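A syntactic check of this kind can be sketched in a few lines. The document description here (field names, expected types, the set of prohibited characters) is entirely hypothetical; a real system would load it from its configured document definitions.

```python
import re

# Hypothetical document description: required fields and their types.
REQUIRED_FIELDS = {"doc_id": str, "date": str, "amount": float}
FORBIDDEN = re.compile(r"[<>;]")  # characters prohibited in text fields

def syntactic_check(record: dict) -> list:
    """Return a list of syntactic errors found in a record."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"wrong type for field: {field}")
        elif expected is str and FORBIDDEN.search(record[field]):
            errors.append(f"prohibited characters in field: {field}")
    return errors
```

A well-formed record yields an empty error list; a record with a missing field or a forbidden character yields one error per violation.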
Semantic control checks the logical consistency of the entered data and the absence of contradictions between them. For example, the total assets reported on a balance sheet should exceed its borrowed liabilities.
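The balance sheet example above translates into simple cross-field rules. The field names and the second rule (that the two sides of the sheet must agree) are illustrative assumptions, not rules taken from any particular reporting standard.

```python
def semantic_check(balance: dict) -> list:
    """Cross-check related figures in a (hypothetical) balance sheet."""
    errors = []
    # total assets should exceed borrowed liabilities
    if balance["total_assets"] <= balance["borrowed_liabilities"]:
        errors.append("borrowed liabilities are not below total assets")
    # the two sides of the balance sheet must agree
    if balance["total_assets"] != balance["liabilities_and_equity"]:
        errors.append("balance sheet sides do not agree")
    return errors
```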
Pragmatic control checks the timeliness, completeness, and required density of data entry: it establishes whether the information is reliable and complete enough to support a decision. An example is cash flow analysis, which should cover data for every year and every source rather than a selective sample. Pragmatic control is carried out only for output, final documents.
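The completeness aspect of pragmatic control, as in the cash flow example, amounts to verifying that every expected (year, source) combination is present. This sketch and its data layout are assumptions made for illustration.

```python
def pragmatic_check(cash_flows: dict, years, sources) -> list:
    """Verify that cash-flow data exists for every year and source.

    cash_flows maps (year, source) pairs to amounts. Returns the list
    of missing (year, source) combinations; an empty list means the
    data set is complete enough to support a decision.
    """
    return [(y, s) for y in years for s in sources
            if (y, s) not in cash_flows]
```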
Validity control is applied at all stages of data processing. Depending on the situation, visual and software control methods are used. Where possible, visual control is applied even before documents are processed by software.
Software control methods check documents, individual records or groups of records, requisites, and individual files.
State bodies that receive mandatory reporting in electronic form use software tools for an initial check of submitted reports against formal and logical requirements. This is the first stage of validation. Another means of confirming that information has not been distorted is the EDS, an electronic digital signature.
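A real EDS relies on asymmetric cryptography (the signer's private key and a verifiable public key); as a minimal sketch of the underlying integrity-check idea, the following uses a symmetric HMAC instead. The function names are illustrative, and this is an analogue of, not a substitute for, a legally recognized digital signature.

```python
import hmac
import hashlib

def sign_report(report: bytes, key: bytes) -> str:
    """Produce an integrity tag for a report (HMAC-SHA256)."""
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, key: bytes, tag: str) -> bool:
    """Check that the report was not altered after signing."""
    # compare_digest performs a constant-time comparison
    return hmac.compare_digest(sign_report(report, key), tag)
```

Any change to the report body after signing invalidates the tag, which is exactly the property that lets the recipient detect distortion.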
When controlling data transmitted by sensors, the problem is solved by installing software developed or adapted for the specific ACS.
When data is transferred over various protocols, algorithms such as TWICE perform control tasks at the application, transport, network, data link, and physical levels. TWICE enhances the reliability of the transmitted information mainly by detecting packet loss, but it does not detect the error itself.
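The generic mechanism behind loss detection of this kind is a per-packet sequence number: gaps in the received sequence indicate lost packets. The sketch below illustrates that general principle only; it is not drawn from the TWICE algorithm itself, whose internal details are not described here.

```python
def detect_lost_packets(received: list) -> list:
    """Return sequence numbers absent from a stream of packets.

    Assumes each packet carries a monotonically increasing sequence
    number; gaps in the observed range indicate lost packets. Note
    that this detects loss, not corruption of packet contents.
    """
    if not received:
        return []
    expected = set(range(min(received), max(received) + 1))
    return sorted(expected - set(received))
```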
The problem of information reliability cannot be solved completely. Modern methods of software control partially settle the question for formalized information that can be described with sufficient precision. For subjective sources, the problem remains unsolved.