Zhu Xiaohuang: Uncertainty and data reconstruction
Author: 21st Century Economic Report    Time: 2022.07.19
By Zhu Xiaohuang
In studying the theory of heterogeneous equilibrium, the author has explored in depth what uncertainty means for the order of human civilization, and for economic life in particular. As this research has deepened, the author has come to believe that the logic and application of data and econometric models must be rethought, and that existing data must be reconstructed within a cognitive framework of uncertainty. This article attempts to elaborate on that view.
1. A deep understanding of the uncertain nature of the world
Our understanding of uncertainty still needs to be deepened, because human beings still live amid contingency. Research on economics, risk and related fields should focus mainly on identifying causes and possible consequences. Uncertainty is the theoretical source of risk management: modern risk management chiefly manages measurable uncertainty, that is, risk in the sense Knight described. But once we move up to the level of uncertainty proper, which can be neither observed nor measured, whether such unmeasurable uncertainty also falls within risk management is a question that deserves consideration.
The invisible hand, the starting point of modern economics, is in fact premised on uncertainty: it relies on market prices to let the market reach equilibrium spontaneously, rather than on presumptuous administrative intervention. The drawback of the planned economy is precisely that its rules for economic operation were designed on the premise of certainty.
Why is the world uncertain in essence? We know that the origin of this world is disorder. The second law of thermodynamics, the law of entropy increase, tells us that entropy keeps growing, and growing entropy means growing disorder. The disorder of the physical world determines the uncertain nature of the world. What we call human civilization is humanity establishing order, setting rules, and generating knowledge through self-discipline; the rules exist to slow the growth of disorder, and are in essence humanity's effort to resist entropy.
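For reference only, and not from the original article: the standard textbook form of the second law says that entropy change satisfies

    dS \ge \frac{\delta Q}{T}, \qquad \Delta S \ge 0 \ \text{(isolated system)},

with equality only for reversible processes; the claim above that disorder tends to grow corresponds to the isolated-system case.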
Beyond this physical basis, differences in human nature are another important source of uncertainty. No two people in the world are the same, just as no two leaves are the same, and these differences make human behavior highly uncertain. The self-regard of human nature is the deep root of human differences, and that arrogance is determined by our genes.
For humanity, the unknown about the universe always vastly exceeds the known. The universe and nature are boundless, while human cognition covers only a tiny fraction of them; the more we know, the more we find we do not know. This objectively determines the uncertainty humanity faces: human cognitive capacity is always limited, the rules of human civilization are always limited, and so the uncertainty of human behavior always exists.
The transmission of traditional knowledge, and deviations in its use, are another striking source of uncertainty. As knowledge is passed on, deviations large and small accumulate relative to its origin in the world. The author of "Human Evolution" likewise observed that humanity's present condition rests on no necessary truth or foundation; it is the product of accidents. This kind of deviation is also reflected in the formulas of probability theory: the probability of an event is, in essence, a measure of how far outcomes deviate from common-sense judgment. In modern industrial civilization, as knowledge and technology are applied ever more widely, cognitive deviations multiply and the probability that things deviate from expectation keeps rising.
Under the framework of uncertainty, we should be soberly aware that humans live amid contingency; the realm of necessity is only something hoped for. As one economist put it, human beings living amid contingency dream of living a life of necessity. That itself is a cognitive deviation about our direction.
Uncertainty is distributed in diverse ways and forms. Many things span long periods and have relative stability, which provides the window for human survival, for civilization, and for the accumulation of knowledge. Other things, above all human behavior, are contingent; once the constraints of order are lost, disorder is their essence.
2. Data classification and reconstruction under the uncertainty framework
Human behavior generates data, and human decisions depend on data; the importance of data is self-evident. In real economic life, using data to observe the present and predict the future is the norm in macroeconomics, microeconomics and all kinds of trading activity. Data analysis of every kind, the determination of data samples, and the selection and calculation of variables and constants in economic models all rest on data.
In recent years the digital economy has flourished and the breadth and depth of data applications keep expanding, so data risk and the risk of the models built on data have become an important risk phenomenon affecting the world. Data should therefore be used cautiously and on a solid theoretical footing. Yet in current practice, especially in the use of econometric models, there is a good deal of doctrinairism: too little thought is given to where data originates and what properties it has, which produces two types of problems in actual economic measurement.
First, historical data and marginal data are mixed together, and large amounts of accidental, non-repeatable data (data that says nothing about the future) are used to build models that claim to predict the future. Second, data entirely unrelated to the question at hand is used as samples for economic forecasting models, financial risk models and intelligent models. Both problems have been intensifying, leaving many models distorted or wasting large amounts of computing power.
Because the world is uncertain in essence, everything under the uncertainty framework has both accidental features and relatively certain features. In the natural sciences, a relatively stable natural environment has allowed many laws and algorithms to be established, so the data the natural sciences generate can basically be repeated and verified, and such data is instructive about the future. Many phenomena in the humanities, by contrast, are bound up with human behavior and are accidental in character; apart from behavior guided by the rules of human civilization, they are difficult to repeat and verify.
On this judgment about uncertainty, the data we can collect divides into historical data and marginal data (that is, terminal or real-time data). Both types mix repeatable data with non-repeatable data, in other words necessary data with accidental data; the former is valid for observing the future, while the latter is useful only for observing the current state. Clarifying the path from the objective world to data collection, and on that basis reconstructing data according to the principle of uncertainty, is the most pressing task. The rules of the material world are relatively stable over long spans: wind, rain and volcanic eruptions have still left a time window within which laws such as Newton's hold, and within that window experiments repeat and data repeats. In the humanities, however, many behaviors cannot be repeated, and if they cannot be repeated, why should such non-repeatable data be used to build models that predict the future? Existing data must therefore be redefined and classified; only data that leaves room for repetition under the premise of uncertainty can guide the future.
At any given time, the economy and society operate under the civilizational order of that period, and that order does not change in the short run. Data generated within such a period can repeat and can be used to observe the future; it can be regarded as necessary data under relative certainty, open to repeated verification. Modeling, extracting factors and calculating the future from historical data that has not been differentiated in this way, however, is unreasonable. This is an important reason why risk management and economic measurement so often miss the mark: in essence, the data has not been classified and reconstructed according to the principle of uncertainty.
Risk measurement and calculation deal with the possibility or probability of future gains and losses, not with predicting which specific events will occur, and they must therefore rest on repeatable data. If one needs to study what specific events will happen in the future, then according to the principle of uncertainty the main task is to study causal relationships, and real-time data becomes the main resource. For this reason, the sources of all types of data need to be re-examined.
How should data be classified? The author sees roughly three ways of dividing it: by the time dimension, into historical data and marginal data (or real-time data); by the form of uncertainty, into repeatable data and non-repeatable data; and by the stability of the rules of the humanities and the sciences, into necessary data and accidental data.
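As a minimal sketch of how these three divisions might be tagged in practice, the following Python fragment is a purely hypothetical illustration; the class names, fields and the forecasting rule are assumptions, not taken from the article.

    # Hypothetical sketch: tag each data sample along the three dimensions above.
    from dataclasses import dataclass
    from enum import Enum

    class TimeDimension(Enum):
        HISTORICAL = "historical"          # accumulated past observations
        MARGINAL = "marginal"              # terminal / real-time observations

    class Repeatability(Enum):
        REPEATABLE = "repeatable"          # generated under a stable, verifiable order
        NON_REPEATABLE = "non_repeatable"  # accidental, cannot be re-verified

    class Necessity(Enum):
        NECESSARY = "necessary"            # reflects a stable rule or law
        ACCIDENTAL = "accidental"          # one-off product of contingency

    @dataclass
    class DataTag:
        time_dim: TimeDimension
        repeatability: Repeatability
        necessity: Necessity

        def usable_for_forecasting(self) -> bool:
            # Only repeatable data is treated as a guide to the future;
            # accidental data is kept for observing the current state only.
            return self.repeatability is Repeatability.REPEATABLE

On this reading, a sample would enter a forecasting model only after usable_for_forecasting() returns True; everything else stays in the pool used to describe the present.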
Relationships in the material world are relatively certain. Human behavior is more complicated: some of it repeats and some cannot be repeated, while behavior under a stable order does repeat; traffic rules, for example, are relatively stable. Once this point is accepted, some practices need to be reconsidered. Risk measurement, for instance, usually uses historical data to determine the probability of default and the probability of loss, and then builds a model to calculate future risk costs. This assumes the historical data repeats; but in fact not all historical data is repeatable, so is such a calculation feasible? Is it credible?
To this end, the author proposes a distinctive approach to data reconstruction: in the time dimension, data is divided into historical data and terminal data (or marginal data); and in terms of uncertainty, data is divided into necessary data (repeatable) and accidental data (non-repeatable).
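To make the idea concrete, here is a deliberately simplified sketch of risk measurement restricted to the repeatable portion of historical records; the records, the 45% loss-given-default and the exposure figure are invented for illustration and do not come from the article.

    # Hypothetical numbers: estimate default probability only from records generated
    # under the current order (the repeatable portion), then a simple risk cost.
    records = [
        # (defaulted, generated_under_current_order)
        (False, True), (True, True), (False, True),
        (False, True), (True, False), (False, False),
    ]
    repeatable = [defaulted for defaulted, same_order in records if same_order]

    pd_repeatable = sum(repeatable) / len(repeatable)   # probability of default: 0.25
    lgd = 0.45                                          # assumed loss given default
    ead = 1_000_000                                      # assumed exposure at default
    expected_loss = pd_repeatable * lgd * ead            # one common form of future risk cost
    print(pd_repeatable, expected_loss)

The point of the sketch is only the filter: the default frequency is computed after the non-repeatable records have been set aside, which is the step the author argues is usually skipped.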
3. Re-examining economic cycles and economic laws
Starting from the principle of uncertainty, once data has been reconstructed as above, the very existence of the economic cycle needs to be reconsidered. At present we divide economic cycles by using historical data to observe peaks and troughs, taking the span between two troughs or two peaks as one economic cycle. But as noted above, not everything in historical data is repeatable. Over the long course of history, the order of society and the economy has also been changing, and data generated under different orders is, by the definitions above, not repeatable. Such data cannot be used, or at least cannot be used directly, so conclusions about cycles drawn from non-repeatable historical data are not reliable. Behind the question of the economic cycle, however, lies another question, the study of economic laws. Is the formation of an economic cycle necessary or accidental? If accidental, there is no economic law behind it; if necessary, there is an economic law at work. How can such laws be found? This requires us, after dividing the data, to select the repeatable portion and study it, because only repeated data can reveal laws.
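As an illustration of the measurement problem described above, the following sketch dates peaks and troughs only within the window assumed to be generated under the same order; the growth figures and the cutoff index are made-up numbers, not data from the article.

    # Hypothetical quarterly growth series; observations before index 4 are assumed
    # to come from a different economic order and are excluded from cycle dating.
    growth = [2.1, 2.8, 3.5, 3.0, 2.2, 1.8, 2.5, 3.2, 3.9, 3.1, 2.4]
    same_order_start = 4
    window = growth[same_order_start:]

    peaks = [i for i in range(1, len(window) - 1)
             if window[i - 1] < window[i] > window[i + 1]]
    troughs = [i for i in range(1, len(window) - 1)
               if window[i - 1] > window[i] < window[i + 1]]
    print("peaks:", peaks, "troughs:", troughs)

Dating a cycle from two troughs or two peaks found this way would at least rest on data that the classification above regards as repeatable.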
So far, then, the author finds it hard to confirm the existence of the so-called economic cycle; what can be affirmed is that causal relationships, once established, can be used to anticipate changes that are about to occur.
4. Intelligence and marginal data
An important application scenario of data reconstruction is intelligence. Intelligence generally requires the support of machine learning, and machine learning requires training on large amounts of historical data, which is then applied to marginal data to generate intelligent suggestions, responses and actions. Making a factory intelligent is relatively easy: in a relatively self-contained environment such as a factory, actions repeat and the order is stable, so its data is repeatable and training is effective. Within society and the economy more broadly, however, the environment in which behavior occurs is complex and changeable. Although the training data for machine learning may look enormous, the repeatable portion may be very small, and the large amount of interference makes it hard for the machine to find laws.
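The following is a minimal sketch, assuming scikit-learn is available, of training only on the repeatable subset of historical data and then applying the model to a new marginal observation; the features, labels and repeatability mask are all invented for illustration.

    # Hypothetical data: keep only the repeatable portion of the history for training,
    # then apply the fitted model to an incoming marginal (real-time) observation.
    from sklearn.linear_model import LogisticRegression

    historical_X = [[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.7, 0.3], [0.1, 1.2]]
    historical_y = [0, 0, 1, 1, 0]
    repeatable_mask = [True, True, True, True, False]   # tags from the reconstruction step

    train_X = [x for x, keep in zip(historical_X, repeatable_mask) if keep]
    train_y = [y for y, keep in zip(historical_y, repeatable_mask) if keep]

    model = LogisticRegression().fit(train_X, train_y)
    marginal_X = [[0.5, 0.5]]                           # new real-time observation
    print(model.predict(marginal_X))

In the factory setting the mask would be almost entirely True; in broad social and economic data, on the author's argument, far less of it would be.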
In addition, when using marginal data it is also necessary to distinguish whether it can repeat. Enormous amounts of data are generated every day, all of which count as marginal data. Computing power is improving rapidly alongside big data technology, but if the repeatable data is selected first and only then processed with big data techniques, a great deal of computing power can be saved and more targeted actions can be obtained more quickly.
Therefore, the data structures used for intelligence also need to be upgraded and optimized.
5. Economic models and quantitative investment
Another application scenario of data reconstruction is economic models and quantitative investment. In the actual operation of quantitative investment it is often found that the same strategy trained on different samples can give very different results, that backtests of the same strategy can also differ widely, and that a strategy which performs well at every stage of backtesting may perform poorly in live trading. Similar problems arise in related fields such as econometric models and bank default models. The root cause is that the market order keeps changing, so the historical data also contains non-repeatable portions.
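A deliberately toy sketch of the sample-dependence described above: the same simple rule is backtested on two different windows of a made-up return series, and the choice of window alone is enough to flip the apparent performance. None of these numbers comes from the article.

    # Hypothetical returns; the strategy holds the asset in a period only if
    # the previous period's return was positive.
    returns = [0.02, 0.03, 0.01, 0.04, 0.02, -0.01, -0.03, 0.02, -0.02, -0.04]

    def strategy_return(window):
        total = 1.0
        for prev, cur in zip(window, window[1:]):
            if prev > 0:
                total *= (1 + cur)
        return total - 1

    window_a = returns[:5]     # one sample period: roughly +10%
    window_b = returns[5:]     # another sample period: roughly -2%
    print(round(strategy_return(window_a), 4), round(strategy_return(window_b), 4))

If part of the history was generated under an order that no longer exists, the window that happens to be chosen largely determines the backtest result.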
In data science there is a concept of data "validity", which resembles the idea above, but validity is only a vague, general notion; what really matters is whether the data can repeat. Models built, and predictions made, from non-repeatable data give worrying results when actually applied to the future. How to reconstruct the data, and how to strip the non-repeatable portions out of complicated historical data, is therefore a very important problem in upgrading investment models.
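One possible screen, offered here only as an assumption and not as the author's method, is to test whether a variable's distribution is stable across historical windows; a significant shift suggests the earlier window was generated under a different order and should be treated as non-repeatable. The sketch assumes scipy is available and uses made-up samples.

    # Hypothetical screen: compare a feature's distribution in an older window with
    # its distribution in a recent window using a two-sample Kolmogorov-Smirnov test.
    from scipy.stats import ks_2samp

    older_window = [0.8, 1.1, 0.9, 1.3, 1.0, 1.2, 0.7, 1.1]
    recent_window = [2.0, 2.3, 1.9, 2.1, 2.4, 2.2, 1.8, 2.0]

    stat, p_value = ks_2samp(older_window, recent_window)
    if p_value < 0.05:
        print("distribution shift: treat the older window as non-repeatable")
    else:
        print("no significant shift: the two windows may be pooled")

Whether such a statistical screen captures what the author means by repeatability is itself a judgment call; it is simply one concrete way to start removing the non-repeatable portions.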
About the author: Zhu Xiaohuang is chairman of the Academic Committee of the Montgs Think Tank, holds a Ph.D. in Economics from Sun Yat-sen University, is a recipient of the State Council special allowance, and serves as president of the Financial Legal Behavior Research Association of the China Behavior Law Society. He was formerly president of China CITIC Bank and chairman of the board of supervisors of CITIC Group. Dr. Zhu is an expert in risk research and macroeconomic research.
- END -