Predictive Modeling Pt 14. Theory & Statistical Design.

Updated

Design:

Historical data, analyzed statistically, only reveals one thing about the data in relation to the question being asked: the past. We observe or analyze a data set already formed (traditional TA indicators + traditional statistical analyses) to seek out patterns in the PAST data. This is great, but we want to know the FUTURE. So how do we bridge historical data to future data using 'now' data? I used historical data to create parameters that acted as rules for a theoretical 'belief'. (My modeling is designed as an alternative to Type A or Type B uncertainty modeling.) Each model intricately interacts with the others, and it is hard to tease out the shared variance right now.

There is a global pulse and a microstate pulse present in bitcoin. Some people analyze one or the other to make estimates when predicting an outcome. I know from neural modeling that you MUST find a global signal and decode (even partially) that signal in order to find a viable microstate signal that is COHERENT with the global model; it helps if you have some prerequisite parameter(s) you are looking for. In this case, I am looking for FUD and FOMO, as well as market manipulation, bots, and anomalies. On a quick dig through people's global trend TAs I saw an outlier of interest, and used it as my primary foundation to build Model A.
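For what it's worth, the outlier-tagging idea described here can be sketched very simply. This is only my own toy illustration of flagging FOMO/FUD candidates as statistical outliers in a return series; the returns, threshold, and labels are made up and are not the actual model parameters:

```python
import statistics

# Made-up hourly percent returns; extreme positive outliers get tagged as
# FOMO candidates, extreme negative ones as FUD candidates.
returns = [0.1, -0.2, 0.05, 3.8, 0.0, -0.1, 0.2, -4.1, 0.15, 0.05]

mu = statistics.mean(returns)
sigma = statistics.stdev(returns)

def tag(r, k=2.0):
    """Label a return as a FOMO/FUD candidate if it sits k std-devs out."""
    z = (r - mu) / sigma
    if z > k:
        return "FOMO?"
    if z < -k:
        return "FUD?"
    return "normal"

labels = [tag(r) for r in returns]
# only the +3.8 and -4.1 moves get flagged as candidates here
```

In a real setting you would obviously want more than a z-score (volume, order-flow, bot signatures), but the point is just that "FUD/FOMO" enters the model as tagged outliers, not as feelings.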

Theory:
"Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data. Dempster–Shafer theory is a generalization of the Bayesian theory of subjective probability. Belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. The degrees of belief themselves may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related. Put another way, it is a way of representing epistemic plausibilities, but it can yield answers that contradict those arrived at using probability theory.

In a first step, subjective probabilities (masses) (FUD and FOMO) are assigned to all subsets of the frame; usually, only a restricted number of sets will have non-zero mass (focal elements). Belief in a hypothesis is constituted by the sum of the masses of all sets enclosed by it. It is the amount of belief that directly supports a given hypothesis or a more specific one, forming a lower bound. Belief (usually denoted Bel) measures the strength of the evidence in favor of a proposition p. It ranges from 0 (indicating no evidence) to 1 (denoting certainty). Plausibility is 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. Or, it can be obtained as the sum of the masses of all sets whose intersection with the hypothesis is not empty. It is an upper bound on the possibility that the hypothesis could be true, i.e. it “could possibly be the true state of the system” up to that value, because there is only so much evidence that contradicts that hypothesis. Plausibility (denoted by Pl) is defined to be Pl(p) = 1 − Bel(~p). It also ranges from 0 to 1 and measures the extent to which evidence in favor of ~p leaves room for belief in p.
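As a concrete illustration of the Bel/Pl definitions quoted above, here is a minimal Python sketch. The frame, labels, and mass numbers are my own made-up example (a two-outcome price move), not taken from the actual models:

```python
# Frame of discernment: the set of all possible answers (here, a price move).
FRAME = frozenset({"up", "down"})

# Basic mass assignment: only focal elements get non-zero mass; masses sum to 1.
mass = {
    frozenset({"up"}):   0.5,  # evidence directly for "up" (e.g. a FOMO signal)
    frozenset({"down"}): 0.2,  # evidence directly for "down" (e.g. a FUD signal)
    FRAME:               0.3,  # uncommitted mass: "up or down"
}

def bel(hypothesis, mass):
    """Sum of masses of all focal sets contained in the hypothesis (lower bound)."""
    return sum(m for s, m in mass.items() if s <= hypothesis)

def pl(hypothesis, mass):
    """Sum of masses of all focal sets intersecting the hypothesis (upper bound)."""
    return sum(m for s, m in mass.items() if s & hypothesis)

up = frozenset({"up"})
print(bel(up, mass))  # 0.5
print(pl(up, mass))   # 0.8, which equals 1 - Bel(down), i.e. Pl(p) = 1 - Bel(~p)
```

Note how the belief/plausibility pair brackets the hypothesis: 0.5 of the mass directly supports "up", 0.2 directly contradicts it, and the remaining 0.3 is uncommitted.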

The idea here being that I built my framework on behavioral analysis of statistical outliers, defined as FUD or FOMO, market manipulation, bots, and anomalies in a continuous data set. The evidence for FUD and FOMO is widely documented and talked about. I am just applying it in a statistical prediction model, using my understanding of my 'belief' foundation.
Note
Continuation from second to last paragraph..

For example, suppose we have a belief of 0.5 and a plausibility of 0.8 for a proposition, say “the cat in the box is dead.” This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. “the cat is alive”) only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is “indeterminate,” meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in your system."
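The quoted passage also names Dempster's rule for combining independent items of evidence. Here is a hedged sketch of that rule: two independent mass functions (say, one derived from a FUD indicator and one from a FOMO indicator) are fused into one, after renormalizing away the conflicting mass. All numbers are illustrative, not from the actual models:

```python
def combine(m1, m2):
    """Dempster's rule: fuse two independent mass functions over one frame."""
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2   # product mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("evidence is totally conflicting; cannot combine")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

FRAME = frozenset({"up", "down"})
m_a = {frozenset({"up"}): 0.6, FRAME: 0.4}                             # bullish evidence
m_b = {frozenset({"up"}): 0.5, frozenset({"down"}): 0.3, FRAME: 0.2}   # mixed evidence

fused = combine(m_a, m_b)
# the fused masses sum to 1 again and shift weight toward "up"
```

The division by (1 − conflict) is what makes the rule require independent evidence sources; highly conflicting sources get renormalized aggressively, which is a known caveat of the method.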

This is essentially how I code my modeling sequence. It is rather simple.
Note

Looking at the modeling prediction zones, you can see some coherence between models. This is built upon the 'belief' foundation I theoretically created; I supported my 'belief' with the constructs of FUD/FOMO and market manipulation.

I took a global pattern trend and stayed within the global pattern while modeling for the microstate trend. This is coherence..
Note
If you like what I am doing.. Show some love :). Be critical. Being silent helps no one.
Note
How do I arrive at a plausibility value?

TA is very much subjective. Being subjective is not a bad thing at all; you just have to know how to properly utilize it scientifically. I am of the VERY strong opinion that excluding subjectivity from the scholarly research methods we use in the science realm is utterly disgraceful and a massive waste of great potential. I do fully understand why we do this: it is to weed out non-credible results, information, and/or ideas from credible ones.

We have used over 100 years of subjectivity in psychoanalysis to give rise to our understanding of mental pathology, and our ability to diagnose these pathologies, just through dialog with another individual. Subjectivity is incorporated into almost EVERY assessment in psychology. Sure, statistically you control for subjectivity and leave the bias out of it, but is that really possible? I do not think so. Subjectivity is inherent to behavior and to understanding behavior. It is the subjective knowledge of the internal SELF that allows us to connect to the world around us, allows humans to formulate opinions, and engage in healthy dialog. These words are subjective to me, but why should that matter, so long as the presentation of information, ideas, or results is done in the objective manner society has dictated for science.

This statistical modeling design you see manifested before you is subjective to my mind, my knowledge, my ideas, my visions. I know from past experience presenting new theoretical modeling to a crowd of hobbyists/professionals online that, in order to be taken seriously, I needed to re-evaluate how I explain my ideas and theories and why I hold them. Every idea should at least be pondered (even for a moment) on Tradingview (or anywhere).

After reading my charts, if you thought, "this dude is nuts. He has no idea what he is doing, and he's acting like he can just come in here with a new idea, preach some shit called geometric linear regression modeling, and say it can predict future trends", well, there are valid points there. There are people who do this professionally, at the top of their game, posting subjective analysis that is far better educated than mine.

This site literally has an entire section dedicated to IDEAS; at any given moment you can see people's ideas on what they SUBJECTIVELY think is going to occur. My idea is NO BETTER than any other idea. That is, unless results can be OBSERVED, VALIDATED, and REPRODUCED, with an included narrative for each move being made, so others can try it too. If others can get similar results using the method, then your method is worthy of a second ponder by those who are VERY interested in such stuff.

Why the fuck should anyone care about what I have to say? Honestly, you shouldn't. I don't know the long-term validity of what I am doing. I just wanted to try something new.

The plausibility value is determined per model in the modeling sequence, based on a lower boundary (+/- FUD), an upper boundary (+/- FOMO), and a prediction projection cone based on current and past trends; these stem from geometric indicators I see visually in the background of the global and microstate data algorithms.
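A rough sketch of how such a cone could be constructed, assuming a plain least-squares trend line as a stand-in for the geometric indicators. The prices, the FUD/FOMO masses, and the widening rule are all my own illustrative choices, not the actual model:

```python
# Toy closes; a simple linear trend is projected forward into a cone whose
# lower edge is pushed down by a FUD mass and upper edge up by a FOMO mass.
prices = [100.0, 101.5, 101.0, 102.8, 103.5, 104.1]
n = len(prices)

# Least-squares slope over bar index 0..n-1 (stand-in for the trend estimate).
xbar = (n - 1) / 2
ybar = sum(prices) / n
slope = sum((i - xbar) * (p - ybar) for i, p in enumerate(prices)) / \
        sum((i - xbar) ** 2 for i in range(n))

m_fud, m_fomo = 0.2, 0.5   # illustrative masses for the two outlier classes
horizon = 5                # bars ahead

cone = []
for h in range(1, horizon + 1):
    center = prices[-1] + slope * h
    width = 0.01 * center * h              # base width grows with distance
    lower = center - width * (1 + m_fud)   # FUD mass widens the downside
    upper = center + width * (1 + m_fomo)  # FOMO mass widens the upside
    cone.append((lower, center, upper))
```

The asymmetry is the point: if the FOMO mass exceeds the FUD mass, the cone skews upward, which is one simple way to turn the belief/plausibility bounds into chartable zones.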

*Posted to update because that took a long ass time to type for some reason.
Beyond Technical Analysis, Chart Patterns, Trend Analysis
