Range Filter [DW]
This is an experimental study designed to filter out minor price action for a clearer view of trends.
Inspired by the QQE's volatility filter, this filter applies the process directly to price rather than to a smoothed RSI.
First, a smooth average price range is calculated for the basis of the filter and multiplied by a specified amount.
Next, the filter is calculated by gating price movements that do not exceed the specified range.
Lastly the target ranges are plotted to display the prices that will trigger filter movement.
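A minimal Pine v5 sketch of that gating idea is shown below. It is not DW's original code: the EMA-based range smoothing, the default period/multiplier, and all names are my assumptions, kept only to illustrate how a filter that ignores sub-range moves can be built.

```pine
//@version=5
indicator("Range Filter sketch", overlay = true)

per  = input.int(100, "Range period")        // assumed default
mult = input.float(3.0, "Range multiplier")  // assumed default

// Smooth average price range, scaled by the specified multiplier
avgRange = ta.ema(ta.ema(math.abs(close - close[1]), per), per * 2 - 1) * mult

// Gate price: the filter only moves when price travels beyond the range around it
var float filt = close
filt := close > filt ? math.max(filt, close - avgRange) : math.min(filt, close + avgRange)

plot(filt, "Range Filter", close > filt ? color.lime : color.red, 2)
plot(filt + avgRange, "Upper target", color.gray)
plot(filt - avgRange, "Lower target", color.gray)
```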
Custom bar colors are included. The color scheme is based on the filtered price trend.
Delta Volume Columns Pro [LucF]
█ OVERVIEW
This indicator displays volume delta information calculated with intrabar inspection on historical bars, and feed updates when running in realtime. It is designed to run in a pane and can display either stacked buy/sell volume columns or a signal line which can be calculated and displayed in many different ways.
Five different models are offered to reveal different characteristics of the calculated volume delta information. Many options are offered to visualize the calculations, giving you much leeway in morphing the indicator's visuals to suit your needs. If you value delta volume information, I hope you will find the time required to master Delta Volume Columns Pro well worth the investment. I am confident that if you combine a proper understanding of the indicator's information with an intimate knowledge of the volume idiosyncrasies on the markets you trade, you can extract useful market intelligence using this tool.
█ WARNINGS
1. The indicator only works on markets where volume information is available.
Please validate that your symbol's feed carries volume information before asking me why the indicator doesn't plot values.
2. When you refresh your chart or re-execute the script on the chart, the indicator will repaint because elapsed realtime bars will then recalculate as historical bars.
3. Because the indicator uses different modes of calculation on historical and realtime bars, it's critical that you understand the differences between them. Details are provided further down.
4. Calculations using intrabar inspection on historical bars can only be done from some chart timeframes. See further down for a list of supported timeframes.
If the chart's timeframe is not supported, no historical volume delta will display.
█ CONCEPTS
Chart bars
Three different types of bars are used in charts:
1. Historical bars are bars that have already closed when the script executes on them.
2. The realtime bar is the current, incomplete bar where a script is running on an open market. There is only one active realtime bar on your chart at any given time.
The realtime bar is where alerts trigger.
3. Elapsed realtime bars are bars that were calculated when they were realtime bars but have since closed.
When a script re-executes on a chart because the browser tab is refreshed or some of its inputs are changed, elapsed realtime bars are recalculated as historical bars.
Why does this indicator use two modes of calculation?
Historical bars on TradingView charts contain OHLCV data only, which is insufficient to calculate volume delta on them with any level of precision. To mine more detailed information from those bars we look at intrabars, i.e., bars from a smaller timeframe (we call it the intrabar timeframe) that are contained in one chart bar. If your chart is running at 1D on a 24x7 market for example, most 1D chart bars will contain 24 underlying 1H bars in their dilation. On historical bars, this indicator looks at those intrabars to amass volume delta information. If the intrabar is up, its volume goes in the Buy bin, and inversely for the Sell bin. When price does not move on an intrabar, the polarity of the last known movement is used to determine in which bin its volume goes.
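A heavily simplified Pine v5 sketch of that binning logic follows. It uses the officially supported `request.security_lower_tf()` rather than the `security()` technique this indicator actually relies on, hard-codes a 60-minute intrabar timeframe (so it assumes a chart timeframe above 1H), and all names are mine; treat it as an illustration, not LucF's code.

```pine
//@version=5
indicator("Volume delta binning sketch")

// Intrabar open, close and volume from a fixed 60-minute lower timeframe (assumed)
[ltfOpen, ltfClose, ltfVolume] = request.security_lower_tf(syminfo.tickerid, "60", [open, close, volume])

float buyVol  = 0.0
float sellVol = 0.0
var int lastPolarity = 1

if array.size(ltfClose) > 0
    for i = 0 to array.size(ltfClose) - 1
        float o = array.get(ltfOpen, i)
        float c = array.get(ltfClose, i)
        // When price doesn't move on the intrabar, keep the last known polarity
        lastPolarity := c > o ? 1 : c < o ? -1 : lastPolarity
        if lastPolarity > 0
            buyVol += array.get(ltfVolume, i)
        else
            sellVol += array.get(ltfVolume, i)

plot(buyVol,   "Buy volume",  color.teal,   style = plot.style_columns)
plot(-sellVol, "Sell volume", color.maroon, style = plot.style_columns)
```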
In realtime, we have access to price and volume change for each update of the chart. Because a 1D chart bar can be updated tens of thousands of times during the day, volume delta calculations on those updates is much more precise. This precision, however, comes at a price:
— The script must be running on the chart for it to keep calculating in realtime.
— If you refresh your chart you will lose all accumulated realtime calculations on elapsed realtime bars, and the realtime bar.
Elapsed realtime bars will recalculate as historical bars, i.e., using intrabar inspection, and the realtime bar's calculations will reset.
When the script recalculates elapsed realtime bars as historical bars, the values on those bars will change, which means the script repaints in those conditions.
— When the indicator first calculates on a chart containing an incomplete realtime bar, it will count ALL the existing volume on the bar as Buy or Sell volume,
depending on the polarity of the bar at that point. This will skew calculations for that first bar. Scripts have no access to the history of a realtime bar's previous updates,
and intrabar inspection cannot be used on realtime bars, so this is the only way to go about this.
— Even if alerts only trigger upon confirmation of their conditions after the realtime bar closes, they are repainting alerts
because they would perhaps not have calculated the same way using intrabar inspection.
— On markets like stocks that often have different EOD and intraday feeds and volume information,
the volume's scale may not be the same for the realtime bar if your chart is at 1D, for example,
and the indicator is using an intraday timeframe to calculate on historical bars.
— Any chart timeframe can be used in realtime mode, but plots that include moving averages in their calculations may require many elapsed realtime bars before they can calculate.
You might prefer drastically reducing the periods of the moving averages, or using the volume columns mode, which displays instant values, instead of the line.
Volume Delta Balances
This indicator uses a variety of methods to evaluate five volume delta balances and derive other values from those balances. The five balances are:
1 — On Bar Balance : This is the only balance using instant values; it is simply the subtraction of the Sell volume from the Buy volume on the bar.
2 — Average Balance : Calculates a distinct EMA for both the Buy and Sell volumes, and subtracts the Sell EMA from the Buy EMA.
3 — Momentum Balance : Starts by calculating, separately for both Buy and Sell volumes, the difference between the same EMAs used in "Average Balance" and
an SMA of double the period used for the "Average Balance" EMAs. The difference for the Sell side is subtracted from the difference for the Buy side,
and an RSI of that value is calculated and brought over to the −50/+50 scale.
4 — Relative Balance : The reference values used in the calculation are the Buy and Sell EMAs used in the "Average Balance".
From those, we calculate two intermediate values using how much the instant Buy and Sell volumes on the bar exceed their respective EMA — but with a twist.
If the bar's Buy volume does not exceed the EMA of Buy volume, a zero value is used. The same goes for the Sell volume with the EMA of Sell volume.
Once we have our two intermediate values for the Buy and Sell volumes exceeding their respective MA, we subtract them. The final "Relative Balance" value is an ALMA of that subtraction.
The rationale behind using zero values when the bar's Buy/Sell volume does not exceed its EMA is to only take into account the more significant volume.
If both instant volume values exceed their MA, then the difference between the two is the signal's value.
The signal is called "relative" because the intermediate values are the difference between the instant Buy/Sell volumes and their respective MA.
This balance flatlines when the bar's Buy/Sell volumes do not exceed their EMAs, which makes it useful to spot areas where trader interest dwindles, such as consolidations.
The smaller the period of the final value's ALMA, the more easily you will see the balance flatline. These flat zones should be considered no-trade zones.
5 — Percent Balance : This balance is the ALMA of the ratio of the "On Bar Balance" value, i.e., the volume delta balance on the bar (which can be positive or negative),
over the total volume for that bar.
From the balances and marker conditions, two more values are calculated:
1 — Marker Bias : It sums the up/down (+1/‒1) occurrences of the markers 1 to 4 over a period you define, so it ranges from −4 to +4, times the period.
Its calculation will depend on the modes used to calculate markers 3 and 4.
2 — Combined Balances : This is the sum of the bull/bear (+1/−1) states of each of the five balances, so it ranges from −5 to +5.
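For readers who think in code, here is a minimal, self-contained Pine v5 sketch of how balances 1 to 5 could be computed. The Buy/Sell volumes are crudely approximated from the bar's polarity (the real script derives them from intrabars or realtime updates), and the lengths, ALMA offset/sigma and names are my assumptions rather than LucF's actual parameters.

```pine
//@version=5
indicator("Volume delta balances sketch")

len = input.int(20, "Balance length")

// Crude stand-ins for the Buy/Sell volumes (the real script derives these from intrabars)
buyVol  = close >= open ? volume : 0.0
sellVol = close <  open ? volume : 0.0

onBarBalance = buyVol - sellVol                                   // 1. On Bar Balance
buyEma  = ta.ema(buyVol, len)
sellEma = ta.ema(sellVol, len)
avgBalance = buyEma - sellEma                                     // 2. Average Balance

// 3. Momentum Balance: per-side EMA minus a double-period SMA, difference run through an RSI recentered on zero
momBalance = ta.rsi((buyEma - ta.sma(buyVol, len * 2)) - (sellEma - ta.sma(sellVol, len * 2)), len) - 50

// 4. Relative Balance: only volume exceeding its own EMA counts, then an ALMA of the difference
relBuy  = buyVol  > buyEma  ? buyVol  - buyEma  : 0.0
relSell = sellVol > sellEma ? sellVol - sellEma : 0.0
relBalance = ta.alma(relBuy - relSell, len, 0.85, 6)

// 5. Percent Balance: ALMA of the on-bar delta over total bar volume
pctBalance = ta.alma(onBarBalance / math.max(volume, 1), len, 0.85, 6)

plot(avgBalance, "Average Balance", color.teal)
plot(momBalance, "Momentum Balance", color.orange)
```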
█ FEATURES
The indicator has two main modes of operation: Columns and Line.
Columns
• In Columns mode you can display stacked Buy/Sell volume columns.
• The buy section always appears above the centerline, the sell section below.
• The top and bottom sections can be colored independently using eight different methods.
• The EMAs of the Buy/Sell values can be displayed (these are the same EMAs used to calculate the "Average Balance").
Line
• Displays one of seven signals: the five balances or one of two complementary values, i.e., the "Marker Bias" or the "Combined Balances".
• You can color the line and its fill using independent calculation modes to pack more information in the display.
You can thus appraise the state of 3 different values using the line itself, its color and the color of its fill.
• A "Divergence Levels" feature will use the line to automatically draw expanding levels on divergence events.
Default settings
Using the indicator's default settings, this is the information displayed:
• The line is calculated on the "Average Balance".
• The line's color is determined by the bull/bear state of the "Percent Balance".
• The line's fill gradient is determined by the advances/declines of the "Momentum Balance".
• The orange divergence dots are calculated using discrepancies between the polarity of the "On Bar Balance" and the chart's bar.
• The divergence levels are determined using the line's level when a divergence occurs.
• The background's fill gradient is calculated on advances/declines of the "Marker Bias".
• The chart bars are colored using advances/declines of the "Relative Balance". Divergences are shown in orange.
• The intrabar timeframe is automatically determined from the chart's timeframe so that a minimum of 50 intrabars are used to calculate volume delta on historical bars.
Alerts
The configuration of the marker conditions explained further is what determines the conditions that will trigger alerts created from this script. Note that simply selecting the display of markers does not create alerts. To create an alert on this script, you must use ALT-A from the chart. You can create multiple alerts triggering on different conditions from this same script; simply configure the markers so they define the trigger conditions for each alert before creating the alert. The configuration of the script's inputs is saved with the alert, so from then on you can change them without affecting the alert. Alert messages will mention the marker(s) that triggered the specific alert event. Keep in mind, when creating alerts on small chart timeframes, that discrepancies between alert triggers and markers displayed on your chart are to be expected. This is because the alert and your chart are running two distinct instances of the indicator on different servers and different feeds. Also keep in mind that while alerts only trigger on confirmed conditions, they are calculated using realtime calculation mode, which entails that if you refresh your chart and elapsed realtime bars recalculate as historical bars using intrabar inspection, markers will not appear in the same places they appeared in realtime. So it's important to understand that even though the alert conditions are confirmed when they trigger, these alerts will repaint.
Let's go through the sections of the script's inputs.
Columns
The size of the Buy/Sell columns always represents their respective importance on the bar, but the coloring mode for tops and bottoms is independent. The default setup uses a standard coloring mode where the Buy/Sell columns are always in the bull/bear color with a higher intensity for the winning side. Seven other coloring modes allow you to pack more information in the columns. When choosing to color the top columns using a bull/bear gradient on "Average Balance", for example, you will have bull/bear colored tops. In order for the color of the bottom columns to continue to show the instant bar balance, you can then choose the "On Bar Balance — Dual Solid Colors" coloring mode to make those bars the color of the winning side for that bar. You can display the averages of the Buy and Sell columns. If you do, its coloring is controlled through the "Line" and "Line fill" sections below.
Line and Line fill
You can select the calculation mode and the thickness of the line, and independent calculations to determine the line's color and fill.
Zero Line
The zero line can display dots when all five balances are bull/bear.
Divergences
You first select the detection mode. Divergences occur whenever the up/down direction of the signal does not match the up/down polarity of the bar. Divergences are used in three components of the indicator's visuals: the orange dot, colored chart bars, and to calculate the divergence levels on the line. The divergence levels are dynamic levels that automatically build from the line's values on divergence events. On consecutive divergences, the levels will expand, creating a channel. This implementation of the divergence levels corresponds to my view that divergences indicate anomalies, hesitations, points of uncertainty if you will. It precludes any attempt to identify a directional bias to divergences. Accordingly, the levels merely take note of divergence events and mark those points in time with levels. Traders then have a reference point from which they can evaluate further movement. The bull/bear/neutral colors used to plot the levels are also congruent with this view in that they are determined by the line's position relative to the levels, which is how I think divergences can be put to the most effective use. One of the coloring modes for the line's fill uses advances/declines in the line after divergence events.
Background
The background can show a bull/bear gradient on six different calculations. As with other gradients, you can adjust its brightness to make its importance proportional to how you use it in your analysis.
Chart bars
Chart bars can be colored using seven different methods. You have the option of emptying the body of bars where volume does not increase, as does my TLD indicator, and you can choose whether you want to show divergences.
Intrabar Timeframe
This is the intrabar timeframe that will be used to calculate volume delta using intrabar inspection on historical bars. You can choose between four modes. The three "Auto-steps" modes calculate, from the chart's timeframe, the intrabar timeframe where the said number of intrabars will make up the dilation of chart bars. Adjustments are made for non-24x7 markets. "Fixed" mode allows you to select the intrabar timeframe you want. Checking the "Show TF" box will display in the lower-right corner the intrabar timeframe used at any given moment. The proper selection of the intrabar timeframe is important. It must achieve maximal granularity to produce precise results while not unduly slowing down calculations, or worse, causing runtime errors. Note that historical depth will vary with the intrabar timeframe. The smaller the timeframe, the shallower your historical plots will be.
Markers
Markers appear when the required condition has been confirmed on a closed bar. The configuration of the markers when you create an alert is what determines when the alert will trigger. Five markers are available:
• Balances Agreement : All five balances are either bullish or bearish.
• Double Bumps : A double bump is two consecutive up/down bars with +/‒ volume delta, and rising Buy/Sell volume above its average.
• Divergence confirmations : A divergence is confirmed up/down when the chosen balance is up/down on the previous bar when that bar was down/up, and this bar is up/down.
• Balance Shifts : These are bull/bear transitions of the selected signal.
• Marker Bias Shifts : Marker bias shifts occur when it crosses into bull/bear territory.
Periods
Allows control over the periods of the different moving averages used to calculate the balances.
Volume Discrepancies
Stock exchanges do not report the same volume for intraday and daily (or higher) resolutions. Other variations in how volume information is reported can also occur in other markets, namely Forex, where volume irregularities can even occur between different intraday timeframes. This will cause discrepancies between the total volume on the bar at the chart's timeframe, and the total volume calculated by adding the volume of the intrabars in that bar's dilation. This does not necessarily invalidate the volume delta information calculated from intrabars, but it tells us that we are using partial volume data. A mechanism to detect chart vs intrabar timeframe volume discrepancies is provided. It allows you to define a threshold percentage above which the background will indicate a difference has been detected.
Other Settings
You can control here the display of the gray dot reminder on realtime bars, and the display of error messages if you are using a chart timeframe that is not greater than the fixed intrabar timeframe, when you use that mode. Disabling the message can be useful if you only use realtime mode at chart timeframes that do not support intrabar inspection.
█ RAMBLINGS
On Volume Delta
Volume is arguably the best complement to interpret price action, and I consider volume delta to be the most effective way of processing volume information. In periods of low-volatility price consolidations, volume will typically also be lower than normal, but slight imbalances in the trend of the buy/sell volume balance can sometimes help put early odds on the direction of the break from consolidation. Additionally, the progression of the volume imbalance can help determine the proximity of the breakout. I also find volume delta and the number of divergences very useful to evaluate the strength of trends. In trends, I am looking for "slow and steady", i.e., relatively low volatility and pauses where price action doesn't look like world affairs are being reassessed. In my personal mythology, this type of trend is often more resilient than high-volatility breakouts, especially when volume balance confirms the general agreement of traders signaled by the low-volatility usually accompanying this type of trend. The volume action on pauses will often help me decide between aggressively taking profits, tightening a stop or going for a longer-term movement. As for reversals, they generally occur in high-volatility areas where entering trades is more expensive and riskier. While the identification of counter-trend reversals fascinates many traders to no end, they represent poor opportunities in my view. Volume imbalances often precede reversals, but I prefer to use volume delta information to identify the areas following reversals where I can confirm them and make relatively low-cost entries with better odds.
On "Buy/Sell" Volume
Buying or selling volume are misnomers, as every unit of volume transacted is both bought and sold by two different traders. While this does not keep me from using the terms, there is no such thing as “buy only” or “sell only” volume. Trader lingo is riddled with peculiarities.
Divergences
The divergence detection method used here relies on a difference between the direction of a signal and the polarity (up/down) of a chart bar. When using the default "On Bar Balance" to detect divergences, however, only the bar's volume delta is used. You may wonder how there can be divergences between buying/selling volume information and price movement on one bar. This will sometimes be due to the calculation's shortcomings, but divergences may also occur in instances where because of order book structure, it takes less volume to increase the price of an asset than it takes to decrease it. As usual, divergences are points of interest because they reveal imbalances, which may or may not become turning points. To your pattern-hungry brain, the divergences displayed by this indicator will — as they do on other indicators — appear to often indicate turnarounds. My opinion is that reality is generally quite sobering and I have no reliable information that would tend to prove otherwise. Exercise caution when using them. Consequently, I do not share the overwhelming enthusiasm of traders in identifying bullish/bearish divergences. For me, the best course of action when a divergence occurs is to wait and see what happens from there. That is the rationale underlying how my divergence levels work; they take note of a signal's level when a divergence occurs, and it's the signal's behavior from that point on that determines if the post-divergence action is bullish/bearish.
Superfluity
In "The Bed of Procrustes", Nassim Nicholas Taleb writes: "To bankrupt a fool, give him information." This indicator can display lots of information. While learning to use a new indicator inevitably requires an adaptation period where we put it through its paces and try out all its options, once you have become used to it and decide to adopt it, rigorously eliminate the components you don't use and configure the remaining ones so their visual prominence reflects their relative importance in your analysis. I tried to provide flexible options for traders to control this indicator's visuals for that exact reason — not for window dressing.
█ LIMITATIONS
• This script uses a special characteristic of the `security()` function allowing the inspection of intrabars — which is not officially supported by TradingView.
It has the advantage of permitting a more robust calculation of volume delta than other methods on historical bars, but also has its limits.
• Intrabar inspection only works on some chart timeframes: 3, 5, 10, 15 and 30 minutes, 1, 2, 3, 4, 6, and 12 hours, 1 day, 1 week and 1 month.
The script’s code can be modified to run on other resolutions.
• When the difference between the chart’s timeframe and the intrabar timeframe is too great, runtime errors will occur. The Auto-Steps selection mechanisms should avoid this.
• All volume is not created equally. Its source, components, quality and reliability will vary considerably with sectors and instruments.
The higher the quality, the more reliably volume delta information can be used to guide your decisions.
You should make it your responsibility to understand the volume information provided in the data feeds you use. It will help you make the most of volume delta.
█ NOTES
For traders
• The Data Window shows key values for the indicator.
• While this indicator displays some of the same information calculated in my Delta Volume Columns ,
I have elected to make it a separate publication so that traders continue to have a simpler alternative available to them. Both code bases will continue to evolve separately.
• All gradients used in this indicator determine their brightness intensities using advances/declines in the signal—not their relative position in a pre-determined scale.
• Because volume delta is relative by nature, it is particularly well-suited to Forex markets, as it quite elegantly filters out the cyclical volume data characterizing the sector.
If you are interested in volume delta, consider having a look at my other "Delta Volume" indicators:
• Delta Volume Realtime Action displays realtime volume delta and tick information on the chart.
• Delta Volume Candles builds volume delta candles on the chart.
• Delta Volume Columns is a simpler version of this indicator.
For coders
• I use the `f_c_gradientRelativePro()` from the PineCoders Color Gradient Framework to build my gradients.
This function has the advantage of allowing begin/end colors for both the bull and bear colors. It also allows us to define the number of steps allowed for each gradient.
I use this to modulate the gradients so they perform optimally on the combination of the signal used to calculate advances/declines,
but also the nature of the visual component the gradient applies to. I use fewer steps for choppy signals and when the gradient is used on discrete visual components
such as volume columns or chart bars.
• I use the PineCoders Coding Conventions for Pine to write my scripts.
• I used functions modified from the PineCoders MTF Selection Framework for the selection of timeframes.
█ THANKS TO:
— The devs from TradingView's Pine and other teams, and the PineCoders who collaborate with them. They are doing amazing work,
and much of what this indicator does could not be done without their recent improvements to Pine.
— A guy called Kuan who commented on a Backtest Rookies presentation of their Volume Profile indicator using a `for` loop.
This indicator started from the intrabar inspection technique illustrated in Kuan's snippet.
— theheirophant , my partner in the exploration of the sometimes weird abysses of `security()`’s behavior at intrabar timeframes.
— midtownsk8rguy , my brilliant companion in mining the depths of Pine graphics.
BERLIN Candles
A problem with Heikin Ashi is that while it gives you a great overview of overall direction, it is rarely possible to use it as a replacement for normal Japanese
candlesticks. The reason for this is that actual price data is lost, since the candles are more akin to a moving average than a different way to see price action. Also, with Heikin Ashi, most of the actual price action is lost, because the candles can be bigger than the high and low of the underlying Japanese candlestick.
With BERLIN Candles I have tried to fix that problem. Each candle uses a smoothed version of the previous Heikin Ashi close as the current BERLIN Candle open, the high and low of the actual Japanese candlestick as the high and low of the BERLIN Candle, and the current Heikin Ashi close as the BERLIN Candle close, with hard limits on the BERLIN Candle open and close values so that they can never exceed the high and low of the underlying Japanese candlestick.
One problem would still persist though: the actual current price data is lost. BERLIN Candles solve this by adding a fifth part to the candles. The close of the underlying Japanese candlestick is indicated with a plus sign. This way, actual price data is never lost, while keeping all of the other benefits of this type of candle.
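A compact Pine v5 sketch of that construction is given below. The smoothing method and length for the open are my assumptions (the description only says "smoothed out"), as are all names; it is meant to illustrate the construction, not reproduce the published script.

```pine
//@version=5
indicator("BERLIN-style candles sketch", overlay = true)

smoothLen = input.int(3, "Smoothing length")   // assumed; the original smoothing is not specified

haClose = (open + high + low + close) / 4

// Open: smoothed previous Heikin Ashi close, clamped to the real bar's range
bOpen  = math.min(math.max(ta.ema(haClose, smoothLen)[1], low), high)
// Close: current Heikin Ashi close, clamped to the real bar's range
bClose = math.min(math.max(haClose, low), high)

plotcandle(bOpen, high, low, bClose, "BERLIN-style candle", bClose >= bOpen ? color.teal : color.red)
// Fifth element: the real close marked with a plus sign so no price data is lost
plotchar(close, "Real close", "+", location.absolute, color.gray, size = size.tiny)
```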
A few added bonuses:
• The addition of the 14-period ATR at the latest candle
• The baseline from Ichimoku is included as an option
• The 14-period ATR value of each candle can be seen in the indicator data as the orange value
Support Resistance Channels
Hello All,
For a long time I was planning to make a Support/Resistance Channels script; I finally had time, and here it is.
How this script works?
- it finds and keeps Pivot Points
- when it finds a new Pivot Point, it clears the older S/R channels, then:
- for each pivot point it searches all pivot points in its own channel with dynamic width
- while creating the S/R channel it calculates its strength
- then sorts all S/R channels by strength
- it shows the strongest S/R channels; before doing this it checks their old locations in the list and adjusts them for better visibility
- if any S/R channel was broken on the last move, it gives an alert and puts a shape below/above the candle
- The colors of the S/R channels are adjusted automatically
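To make the strength idea concrete, here is a heavily simplified Pine v5 sketch of the core loop: collect pivot levels, derive a dynamic channel width from the last 300 bars, and count how many stored pivots fall inside the channel anchored at the newest pivot. The array size cap, defaults and names are my assumptions; the published script's channel building and sorting are considerably more elaborate.

```pine
//@version=5
indicator("S/R channel strength sketch", overlay = true)

prd      = input.int(10, "Pivot Period")
maxWidth = input.float(5.0, "Maximum Channel Width %")

// Channel width as a percentage of the recent high/low range (the description uses the last 300 bars)
cwidth = (ta.highest(300) - ta.lowest(300)) * maxWidth / 100

// Keep the most recent pivot levels in an array
var pivots = array.new_float()
ph = ta.pivothigh(prd, prd)
pl = ta.pivotlow(prd, prd)
if not na(ph)
    array.unshift(pivots, ph)
if not na(pl)
    array.unshift(pivots, pl)
if array.size(pivots) > 50
    array.pop(pivots)

// Strength of the channel anchored at the newest pivot: how many stored pivots fall inside its width
strength = 0
if array.size(pivots) > 0
    anchor = array.get(pivots, 0)
    for i = 0 to array.size(pivots) - 1
        if math.abs(array.get(pivots, i) - anchor) <= cwidth
            strength += 1

plot(strength, "Strength of newest channel", display = display.data_window)
```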
You can set/change following settings:
- Pivot Period
- Source : High/Low or Close/Open can be used
- Maximum Channel Width %: this is the maximum channel width rate, calculated using the Highest/Lowest levels in the last 300 bars
- Number of S/R to show : this is the number of Strongest S/R to show
- Loopback Period: While calculating S/R levels it checks Pivot Points in LoopBack Period
- Show S/R on last # Bars: To see S/R levels only on last N bars
- Start Date: the script starts calculating Pivot Points from this date; I added this option for visual clarity, as explained below
- You can set colors/transparency
- and You can enable/disable shapes for broken S/R levels
Examples:
You can change colors as you wish:
Here "Show S/R on last # Bars" is set to 100:
Sometimes the display may become distorted because of old S/R levels; to solve this, set the "Start Date" option so the script starts in the visible part of the chart (the last 292 bars).
In the first screenshot it doesn't look good (shrunk); in the second screenshot, after I set the "Start Date", it looks better. If you change the time frame, don't forget to set it again :)
Enjoy!
MathSpecialFunctionsConvolve1D
Library "MathSpecialFunctionsConvolve1D"
Convolution is one of the most important mathematical operations used in signal processing. This simple mathematical operation pops up in many scientific and industrial applications, from its use in a billion-layer large CNN to simple image denoising.
___
Reference:
www.algorithm-archive.org
numpy.org
lloydrochester.com
www.geeksforgeeks.org
f(signal, filter)
Convolve
Parameters:
signal (array) : List with signal data.
filter (array) : List with weights to apply to the signal data.
Returns: Discrete, linear convolution of `signal` and `filter`.
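The snippet below is a generic discrete-convolution sketch in Pine v5, written to illustrate what the exported function computes; it is not the library's actual implementation, and the rolling-window example, kernel and names are my own.

```pine
//@version=5
indicator("1D convolution sketch")

// Discrete linear convolution: (signal * filter)[n] = sum over k of signal[k] * filter[n - k]
convolve1d(signal, filter) =>
    n = array.size(signal)
    m = array.size(filter)
    result = array.new_float(n + m - 1, 0.0)
    for i = 0 to n - 1
        for j = 0 to m - 1
            array.set(result, i + j, array.get(result, i + j) + array.get(signal, i) * array.get(filter, j))
    result

// Example: keep a rolling window of the last 10 closes and smooth it with a 3-tap averaging kernel
var window = array.new_float()
array.push(window, close)
if array.size(window) > 10
    array.shift(window)
kernel   = array.from(1.0 / 3, 1.0 / 3, 1.0 / 3)
smoothed = convolve1d(window, kernel)

// Plot one sample from near the end of the convolved result
plot(array.size(smoothed) > 2 ? array.get(smoothed, array.size(smoothed) - 3) : na, "Convolved close")
```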
ErrorFunctions
Library "ErrorFunctions"
A collection of functions used to approximate the area beneath a Gaussian curve.
Because an ERF (Error Function) is an integral, there is no closed-form solution to calculating the area beneath the curve. Meaning all ERFs are approximations; precisely wrong, but mostly accurate. How close you need to get to the actual area depends entirely on your use case, with more precision being less efficient.
The internal precision of floats in Pine Script is 1e-16 (16 decimals, aka. double precision). This library adapts well known algorithms designed to efficiently reach double precision. Single precision alternates are also included. All of them were made free to use, modify, and distribute by their original authors.
HASTINGS
Adaptation of a single precision ERF by Cecil Hastings Jr, published through Princeton University in 1955. It was later documented by Abramowitz and Stegun as equation 7.1.26 in their 1972 Handbook of Mathematical Functions. Fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
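For reference, a minimal Pine v5 rendering of Abramowitz & Stegun equation 7.1.26 is shown below (maximum absolute error about 1.5e-7). This is my own transcription of the classic formula, not the library's exported `erf()`.

```pine
//@version=5
indicator("Hastings erf sketch")

// Abramowitz & Stegun eq. 7.1.26, single precision rational approximation
erf_hastings(x) =>
    a1 =  0.254829592
    a2 = -0.284496736
    a3 =  1.421413741
    a4 = -1.453152027
    a5 =  1.061405429
    p  =  0.3275911
    s  = x < 0 ? -1.0 : 1.0        // erf is an odd function
    ax = math.abs(x)
    t  = 1.0 / (1.0 + p * ax)
    y  = 1.0 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * math.exp(-ax * ax)
    s * y

plot(erf_hastings(1.0), "erf(1)")   // approximately 0.8427
```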
GILES
Adaptation of a single precision Inverse ERF by Michael Giles, published through the University of Oxford in 2012. It reverses the ERF, estimating an X coordinate from an area. It too is fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
LIBC
Adaptation of the double precision ERF & ERFC in the standard C library (aka. libc). It is also the same ERF & ERFC that SciPy uses. While not quite as efficient as the Hastings approximation, it's still very fast and fully maximizes Pine's precision.
BOOST
Adaptation of the double precision Inverse ERF & Inverse ERFC in the Boost Math C++ library. SciPy uses these as well. These reverse the ERF & ERFC, estimating an X coordinate from an area. It too isn't quite as efficient as the Giles approximation, but still fast and fully maximizes Pine's precision.
While these algorithms are not exported directly, they are available through their exported counterparts.
- - -
ERROR FUNCTIONS
erf(x, precise)
An Error Function estimates the theoretical error of a measurement.
Parameters:
x (float) : (float) Upper limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -1 and 1.
erfc(x, precise)
A Complementary Error Function estimates the difference between a theoretical error and infinity.
Parameters:
x (float) : (float) Lower limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 2.
erfinv(x, precise)
An Inverse Error Function reverses the erf() by estimating the original measurement from the theoretical error.
Parameters:
x (float) : (float) Theoretical error.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
erfcinv(x, precise)
An Inverse Complementary Error Function reverses the erfc() by estimating the original measurement from the difference between the theoretical error and infinity.
Parameters:
x (float) : (float) Difference between the theoretical error and infinity.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
- - -
DISTRIBUTION FUNCTIONS
pdf(x, m, s)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
x (float) : (float) X coordinate for which a density will be estimated.
m (float) : (float) Mean
s (float) : (float) Sigma
Returns: (float) Between 0 and ∞.
cdf(z, precise)
A Cumulative Distribution Function estimates the area under a Gaussian curve between negative infinity and the Z Score.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
cdfinv(a, precise)
An Inverse Cumulative Distribution Function reverses the cdf() by estimating the Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -∞ and +∞
cdfab(z1, z2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Gaussian curve between two Z Scores (A and B).
Parameters:
z1 (float) : (float) First Z Score.
z2 (float) : (float) Second Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ttt(z, precise)
A Two-Tailed Test estimates the area under a Gaussian curve between symmetrical ± Z scores and ± infinity.
Parameters:
z (float) : (float) One of the symmetrical Z Scores.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
tttinv(a, precise)
An Inverse Two-Tailed Test reverses the ttt() by estimating the absolute Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.
ott(z, precise)
A One-Tailed Test estimates the area under a Gaussian curve between an absolute Z Score and infinity.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ottinv(a, precise)
An Inverse One-Tailed Test reverses the ott() by estimating the Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.
Trend Volatility Index (TVI)
A robust nonparametric oscillator for structural trend volatility detection
⸻
What is this?
TVI is a volatility oscillator designed to measure the strength and emergence of price trends using nonparametric statistics.
It calculates a U-statistic based on the Gini mean difference across multiple simple moving averages.
This allows for objective, robust, and unbiased quantification of trend volatility in tick-scale values.
⸻
What can it do?
• Quantify trend strength as a continuous value aligned with tick price scale
• Detect trend breakouts and volatility expansions
• Identify range-bound market states
• Detect early signs of new trends with minimal lag
⸻
What can’t it do?
• Predict future price levels
• Predict trend direction before confirmation
⸻
How it works
TVI computes a nonparametric dispersion metric (Gini mean difference) from multiple SMAs of different lengths.
As this metric shares the same dimension as price ticks, it can be directly interpreted on the chart as a volatility gauge.
The output is plotted using candlestick-style charts to enhance visibility of change rate and trend behavior.
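A self-contained Pine v5 sketch of the core computation, the Gini mean difference (a U-statistic) taken across a fan of SMAs, is shown below. The specific SMA lengths and the plain line output (instead of the candlestick-style presentation) are my simplifications, not the author's settings.

```pine
//@version=5
indicator("TVI sketch: Gini mean difference of SMAs")

// Gini mean difference: average absolute difference over all pairs of values
giniMeanDiff(xs) =>
    n = array.size(xs)
    total = 0.0
    for i = 0 to n - 2
        for j = i + 1 to n - 1
            total += math.abs(array.get(xs, i) - array.get(xs, j))
    total / (n * (n - 1) / 2.0)

// Sample a fan of SMAs with different lengths (the exact set of lengths is my assumption)
smas = array.from(ta.sma(close, 5), ta.sma(close, 10), ta.sma(close, 20), ta.sma(close, 40), ta.sma(close, 80))
tvi = giniMeanDiff(smas)

plot(tvi, "Trend Volatility Index (sketch)", color.orange)
```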
⸻
Disclaimer
TVI does not predict price. It is a structural indicator designed to support discretionary judgment.
Trading carries inherent risk, and this tool does not guarantee profitability. Use at your own discretion.
⸻
Innovation
This indicator introduces a novel approach to trend volatility by applying U-statistics over time series
to produce a nonparametric, unbiased, and robust estimate of structural volatility.
Summary (translated from the Japanese original)
The Trend Volatility Index (TVI) is a volatility oscillator developed to objectively measure trend strength using a nonparametric U-statistic (the Gini mean difference).
It changes continuously on the tick scale and quantitatively detects trend breakouts, ranges, and early signs of a new move.
It does not predict future prices or direction; it only provides a robust assessment of the current structural dispersion.
Cointegration Heatmap & Spread Table [EdgeTerminal]
The Cointegration Heatmap is a powerful visual and quantitative tool designed to uncover deep, statistically meaningful relationships between assets.
Unlike traditional indicators that react to price movement, this tool analyzes the underlying statistical relationship between two time series and tracks when they diverge from their long-term equilibrium — offering actionable signals for mean-reversion trades.
What Is Cointegration?
Most traders are familiar with correlation, which measures how two assets move together in the short term. But correlation is shallow — it doesn’t imply a stable or predictable relationship over time.
Cointegration, however, is a deeper statistical concept: Two assets are cointegrated if a linear combination of their prices or returns is stationary, even if the individual series themselves are non-stationary.
Cointegration is a foundational concept in time series analysis, widely used by hedge funds, proprietary trading firms, and quantitative researchers. This indicator brings that institutional-grade concept into an easy-to-use and fully visual TradingView indicator.
This tool helps answer key questions like:
“Which stocks tend to move in sync over the long term?”
“When are two assets diverging beyond statistical norms?”
“Is now the right time to short one and long the other?”
Using a combination of regression analysis, residual modeling, and Z-score evaluation, this indicator surfaces opportunities where price relationships are stretched and likely to snap back — making it ideal for building low-risk, high-probability trade setups.
In simple terms:
Cointegrated assets drift apart temporarily, but always come back together over time. This behavior is the foundation of successful pairs trading.
How the Indicator Works
The Cointegration Heatmap indicator works across any market supported on TradingView — from stocks and ETFs to cryptocurrencies and forex pairs.
You enter your list of symbols, choose a timeframe, and the indicator updates every bar with live cointegration scores, spread signals, and trade-ready insights.
Indicator Settings:
Symbol list: a customizable list of symbols separated by commas
Returns timeframe: time frame selection for return sampling (Weekly or Monthly)
Max periods: max periods to limit the data to a certain time and to control indicator performance
This indicator accomplishes three major goals in one streamlined package:
Identifies stable long-term relationships (cointegration) between assets, using a heatmap visualization.
Tracks the spread — the difference between actual prices and the predicted linear relationship — between each pair.
Generates trade signals based on Z-score deviations from the mean spread, helping traders know when a pair is statistically overextended and likely to mean revert.
The math:
Returns are calculated using spread tickers to ensure alignment in time and adjust for dividends, splits, and other inconsistencies.
For each unique pair of symbols, we perform a linear regression:
Y_t = α + β·X_t + ε_t
Then we compute the residuals (errors from the regression):
Spread_t = Y_t − (α + β·X_t)
Then we calculate the standard deviation of the spread over a moving window (default: 100 samples) and, finally, define the Cointegration Score:
S = 1 / (standard deviation of the residuals)
This means that the lower the deviation, the tighter the relationship, so higher scores indicate stronger cointegration.
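The sketch below shows those steps in Pine v5 for a single, hard-coded pair: a rolling OLS beta derived from correlation and standard deviations, the residual spread, its dispersion, the resulting score, and the spread Z-score. The symbols, the window length and the use of raw closes (rather than the spread-adjusted return series the indicator describes) are all assumptions for illustration, not EdgeTerminal's code.

```pine
//@version=5
indicator("Cointegration score sketch")

symX = input.symbol("NASDAQ:AAPL", "Base asset (X)")    // placeholder symbol
symY = input.symbol("NASDAQ:MSFT", "Quoted asset (Y)")  // placeholder symbol
win  = input.int(100, "Residual window")

x = request.security(symX, timeframe.period, close)
y = request.security(symY, timeframe.period, close)

// Rolling OLS of Y on X:  Y = alpha + beta * X
beta  = ta.correlation(y, x, win) * ta.stdev(y, win) / ta.stdev(x, win)
alpha = ta.sma(y, win) - beta * ta.sma(x, win)

// Residual (spread), its dispersion, and the cointegration score
spread = y - (alpha + beta * x)
sigma  = ta.stdev(spread, win)
score  = 1.0 / sigma

// Z-score of the current spread versus its rolling mean
z = (spread - ta.sma(spread, win)) / sigma

plot(score, "Cointegration score", display = display.data_window)
plot(z, "Spread Z-score", color.teal)
hline(2)
hline(-2)
hline(0)
```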
Always remember that cointegration can break down so monitor the asset over time and over multiple different timeframes before making a decision.
How to use the indicator
The heatmap table:
The indicator displays 2 very important tables, one in the middle and one on the right side. After entering your symbols, the first table to pay attention to is the middle heatmap table.
Any pair with a cointegration value of 25% or above is worth paying attention to and has a strong and stable relationship. Anything below that is weak and not tradable.
Additionally, the 40% level is another important line to cross. Assets that have a cointegration score of over 40% will most likely have an extremely strong relationship.
Think about it this way, the higher the percentage, the tighter and more statistically reliable the relationship is.
The spread table:
After finding a good asset pair using heatmap, locate the same pair in the spread table (right side).
Here’s what you’ll see on the table:
Spread: Current difference between the two symbols based on the regression fit
Mean: Historical average of that spread
Z-score: How far current spread is from the mean in standard deviations
Signal: Trade suggestion: Short, Long, or Neutral
Since you’re expecting mean reversion, the idea is that the spread will return to the average. You want to take a trade when the z-score is either over +2 or below -2 and exit when z-score returns to near 0.
You will usually see the trade suggestion on the spread chart but you can make your own decision based on your risk level.
Keep in mind that the Z-score for each pair refers to how far off the first asset is from the mean relative to the second one; for example, if you see STOCKA vs STOCKB with a Z-score of -1.55, we are regressing STOCKB (Y) on STOCKA (X).
Here, STOCKB is the quoted asset and STOCKA is the base asset.
A Z-score of -1.55 means that STOCKB is much lower than expected relative to STOCKA, so the trade would be a long position in STOCKB and a short position in STOCKA.
Bloomberg Financial Conditions Index (Proxy)
The Bloomberg Financial Conditions Index (BFCI): A Proxy Implementation
Financial conditions indices (FCIs) have become essential tools for economists, policymakers, and market participants seeking to quantify and monitor the overall state of financial markets. Among these measures, the Bloomberg Financial Conditions Index (BFCI) has emerged as a particularly influential metric. Originally developed by Bloomberg L.P., the BFCI provides a comprehensive assessment of stress or ease in financial markets by aggregating various market-based indicators into a single, standardized value (Hatzius et al., 2010).
The original Bloomberg Financial Conditions Index synthesizes approximately 50 different financial market variables, including money market indicators, bond market spreads, equity market valuations, and volatility measures. These variables are normalized using a Z-score methodology, weighted according to their relative importance to overall financial conditions, and then aggregated to produce a composite index (Carlson et al., 2014). The resulting measure is centered around zero, with positive values indicating accommodative financial conditions and negative values representing tighter conditions relative to historical norms.
As Angelopoulou et al. (2014) note, financial conditions indices like the BFCI serve as forward-looking indicators that can signal potential economic developments before they manifest in traditional macroeconomic data. Research by Adrian et al. (2019) demonstrates that deteriorating financial conditions, as measured by indices such as the BFCI, often precede economic downturns by several months, making these indices valuable tools for predicting changes in economic activity.
Proxy Implementation Approach
The implementation presented in this Pine Script indicator represents a proxy of the original Bloomberg Financial Conditions Index, attempting to capture its essential features while acknowledging several significant constraints. Most critically, while the original BFCI incorporates approximately 50 financial variables, this proxy version utilizes only six key market components due to data accessibility limitations within the TradingView platform.
These components include:
Equity market performance (using SPY as a proxy for S&P 500)
Bond market yields (using TLT as a proxy for 20+ year Treasury yields)
Credit spreads (using the ratio between LQD and HYG as a proxy for investment-grade to high-yield spreads)
Market volatility (using VIX directly)
Short-term liquidity conditions (using SHY relative to equity prices as a proxy)
Each component is transformed into a Z-score based on log returns, weighted according to approximated importance (with weights derived from literature on financial conditions indices by Brave and Butters, 2011), and aggregated into a composite measure.
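Below is a minimal Pine v5 sketch of that pipeline for four of the listed components (the SHY-based liquidity proxy is omitted for brevity). The plain ticker strings, the lookback and the weights are illustrative assumptions only; the published script approximates its weights from the FCI literature cited above, and the sign conventions for credit spreads and volatility shown here are mine.

```pine
//@version=5
indicator("BFCI proxy sketch")

lookback = input.int(252, "Z-score lookback")  // assumed window

// Z-score of log returns for any series
zscore(src, len) =>
    r = math.log(src / src[1])
    (r - ta.sma(r, len)) / ta.stdev(r, len)

spy = request.security("SPY", timeframe.period, close)
tlt = request.security("TLT", timeframe.period, close)
lqd = request.security("LQD", timeframe.period, close)
hyg = request.security("HYG", timeframe.period, close)
vix = request.security("VIX", timeframe.period, close)

zEquity = zscore(spy, lookback)
zBond   = zscore(tlt, lookback)
zCredit = -zscore(lqd / hyg, lookback)   // widening credit spreads tighten conditions
zVol    = -zscore(vix, lookback)         // rising volatility tightens conditions

// Illustrative weights only
fci = 0.25 * zEquity + 0.15 * zBond + 0.25 * zCredit + 0.35 * zVol

plot(fci, "Proxy FCI", fci >= 0 ? color.green : color.red)
hline(0)
```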
Differences from the Original BFCI
The methodology employed in this proxy differs from the original BFCI in several important ways. First, the variable selection is necessarily limited compared to Bloomberg's comprehensive approach. Second, the proxy relies on ETFs and publicly available indices rather than direct market rates and spreads used in the original. Third, the weighting scheme, while informed by academic literature, is simplified compared to Bloomberg's proprietary methodology, which may employ more sophisticated statistical techniques such as principal component analysis (Kliesen et al., 2012).
These differences mean that while the proxy BFCI captures the general direction and magnitude of financial conditions, it may not perfectly replicate the precision or sensitivity of the original index. As Aramonte et al. (2013) suggest, simplified proxies of financial conditions indices typically capture broad movements in financial conditions but may miss nuanced shifts in specific market segments that more comprehensive indices detect.
Practical Applications and Limitations
Despite these limitations, research by Arregui et al. (2018) indicates that even simplified financial conditions indices constructed from a limited set of variables can provide valuable signals about market stress and future economic activity. The proxy BFCI implemented here still offers significant insight into the relative ease or tightness of financial conditions, particularly during periods of market stress when correlations among financial variables tend to increase (Rey, 2015).
In practical applications, users should interpret this proxy BFCI as a directional indicator rather than an exact replication of Bloomberg's proprietary index. When the index moves substantially into negative territory, it suggests deteriorating financial conditions that may precede economic weakness. Conversely, strongly positive readings indicate unusually accommodative financial conditions that might support economic expansion but potentially also signal excessive risk-taking behavior in markets (López-Salido et al., 2017).
The visual implementation employs a color gradient system that enhances interpretation, with blue representing neutral conditions, green indicating accommodative conditions, and red signaling tightening conditions—a design choice informed by research on optimal data visualization in financial contexts (Few, 2009).
References
Adrian, T., Boyarchenko, N. and Giannone, D. (2019) 'Vulnerable Growth', American Economic Review, 109(4), pp. 1263-1289.
Angelopoulou, E., Balfoussia, H. and Gibson, H. (2014) 'Building a financial conditions index for the euro area and selected euro area countries: what does it tell us about the crisis?', Economic Modelling, 38, pp. 392-403.
Aramonte, S., Rosen, S. and Schindler, J. (2013) 'Assessing and Combining Financial Conditions Indexes', Finance and Economics Discussion Series, Federal Reserve Board, Washington, D.C.
Arregui, N., Elekdag, S., Gelos, G., Lafarguette, R. and Seneviratne, D. (2018) 'Can Countries Manage Their Financial Conditions Amid Globalization?', IMF Working Paper No. 18/15.
Brave, S. and Butters, R. (2011) 'Monitoring financial stability: A financial conditions index approach', Economic Perspectives, Federal Reserve Bank of Chicago, 35(1), pp. 22-43.
Carlson, M., Lewis, K. and Nelson, W. (2014) 'Using policy intervention to identify financial stress', International Journal of Finance & Economics, 19(1), pp. 59-72.
Few, S. (2009) Now You See It: Simple Visualization Techniques for Quantitative Analysis. Analytics Press, Oakland, CA.
Hatzius, J., Hooper, P., Mishkin, F., Schoenholtz, K. and Watson, M. (2010) 'Financial Conditions Indexes: A Fresh Look after the Financial Crisis', NBER Working Paper No. 16150.
Kliesen, K., Owyang, M. and Vermann, E. (2012) 'Disentangling Diverse Measures: A Survey of Financial Stress Indexes', Federal Reserve Bank of St. Louis Review, 94(5), pp. 369-397.
López-Salido, D., Stein, J. and Zakrajšek, E. (2017) 'Credit-Market Sentiment and the Business Cycle', The Quarterly Journal of Economics, 132(3), pp. 1373-1426.
Rey, H. (2015) 'Dilemma not Trilemma: The Global Financial Cycle and Monetary Policy Independence', NBER Working Paper No. 21162.
TASC 2025.06 Cybernetic Oscillator
█ OVERVIEW
This script implements the Cybernetic Oscillator introduced by John F. Ehlers in his article "The Cybernetic Oscillator For More Flexibility, Making A Better Oscillator" from the June 2025 edition of the TASC Traders' Tips. It cascades two-pole highpass and lowpass filters, then scales the result by its root mean square (RMS) to create a flexible normalized oscillator that responds to a customizable frequency range for different trading styles.
█ CONCEPTS
Oscillators are indicators widely used by technical traders. These indicators swing above and below a center value, emphasizing cyclic movements within a frequency range. In his article, Ehlers explains that all oscillators share a common characteristic: their calculations involve computing differences. The reliance on differences is what causes these indicators to oscillate about a central point.
The difference between two data points in a series acts as a highpass filter — it allows high frequencies (short wavelengths) to pass through while significantly attenuating low frequencies (long wavelengths). Ehlers demonstrates that a simple difference calculation attenuates lower-frequency cycles at a rate of 6 dB per octave. However, the difference also significantly amplifies cycles near the shortest observable wavelength, making the result appear noisier than the original series. To mitigate the effects of noise in a differenced series, oscillators typically smooth the series with a lowpass filter, such as a moving average.
Ehlers highlights an underlying issue with smoothing differenced data to create oscillators. He postulates that market data statistically follows a pink spectrum, where the amplitudes of cyclic components in the data are approximately directly proportional to the underlying periods. Specifically, he suggests that cyclic amplitude increases by 6 dB per octave of wavelength.
Because some conventional oscillators, such as RSI, use differencing calculations that attenuate cycles by only 6 dB per octave, and market cycles increase in amplitude by 6 dB per octave, such calculations do not have a tangible net effect on larger wavelengths in the analyzed data. The influence of larger wavelengths can be especially problematic when using these oscillators for mean reversion or swing signals. For instance, an expected reversion to the mean might be erroneous because the oscillator's mean might significantly deviate from its center over time.
To address the issues with conventional oscillator responses, Ehlers created a new indicator dubbed the Cybernetic Oscillator. It uses a simple combination of highpass and lowpass filters to emphasize a specific range of frequencies in the market data, then normalizes the result based on RMS. The process is as follows:
Apply a two-pole highpass filter to the data. This filter's critical period defines the longest wavelength in the oscillator's passband.
Apply a two-pole SuperSmoother (lowpass filter) to the highpass-filtered data. This filter's critical period defines the shortest wavelength in the passband.
Scale the resulting waveform by its RMS. If the filtered waveform follows a normal distribution, the scaled result represents amplitude in standard deviations.
The oscillator's two-pole filters attenuate cycles outside the desired frequency range by 12 dB per octave. This rate outweighs the apparent rate of amplitude increase for successively longer market cycles (6 dB per octave). Therefore, the Cybernetic Oscillator provides a more robust isolation of cyclic content than conventional oscillators. Best of all, traders can set the periods of the highpass and lowpass filters separately, enabling fine-tuning of the frequency range for different trading styles.
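As a worked illustration, here is a Pine v5 sketch of that three-step chain using Ehlers' standard two-pole highpass and SuperSmoother formulations. It is written from those well-known filter definitions and the process described above, not copied from the published TASC code, so coefficients and defaults may differ in detail.

```pine
//@version=5
indicator("Cybernetic Oscillator sketch")

hpPeriod = input.int(30, "Highpass period")
lpPeriod = input.int(20, "Lowpass period")
rmsLen   = input.int(100, "RMS length")

// Two-pole highpass: removes cycles longer than hpPeriod
a1 = (math.cos(0.707 * 2 * math.pi / hpPeriod) + math.sin(0.707 * 2 * math.pi / hpPeriod) - 1) / math.cos(0.707 * 2 * math.pi / hpPeriod)
var float hp = 0.0
hp := math.pow(1 - a1 / 2, 2) * (close - 2 * nz(close[1]) + nz(close[2])) + 2 * (1 - a1) * nz(hp[1]) - math.pow(1 - a1, 2) * nz(hp[2])

// Two-pole SuperSmoother: removes cycles shorter than lpPeriod
a2 = math.exp(-1.414 * math.pi / lpPeriod)
b1 = 2 * a2 * math.cos(1.414 * math.pi / lpPeriod)
c2 = b1
c3 = -a2 * a2
c1 = 1 - c2 - c3
var float lp = 0.0
lp := c1 * (hp + nz(hp[1])) / 2 + c2 * nz(lp[1]) + c3 * nz(lp[2])

// Normalize by the root mean square of the filtered waveform
rms = math.sqrt(ta.sma(lp * lp, rmsLen))
osc = rms != 0 ? lp / rms : 0.0

plot(osc, "Cybernetic Oscillator (sketch)", osc >= 0 ? color.teal : color.red)
hline(0)
```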
█ USAGE
The "Highpass period" input in the "Settings/Inputs" tab specifies the longest wavelength in the oscillator's passband, and the "Lowpass period" input defines the shortest wavelength. The oscillator becomes more responsive to rapid movements with a smaller lowpass period. Conversely, it becomes more sensitive to trends with a larger highpass period. Ehlers recommends setting the smallest period to a value above 8 to avoid aliasing. The highpass period must not be smaller than the lowpass period. Otherwise, it causes a runtime error.
The "RMS length" input determines the number of bars in the RMS calculation that the indicator uses to normalize the filtered result.
This indicator also features two distinct display styles, which users can toggle with the "Display style" input. With the "Trend" style enabled, the indicator plots the oscillator with one of two colors based on whether its value is above or below zero. With the "Threshold" style enabled, it plots the oscillator as a gray line and highlights overbought and oversold areas based on the user-specified threshold.
Below, we show two instances of the script with different settings on an equities chart. The first uses the "Threshold" style with default settings to pass cycles between 20 and 30 bars for mean reversion signals. The second uses a larger highpass period of 250 bars and the "Trend" style to visualize trends based on cycles spanning less than one year:
Dynamic Volume Clusters with Retest Signals (Zeiierman)
█ Overview
The Dynamic Volume Clusters with Retest Signals indicator is designed to detect key Volume Clusters and provide Retest Signals. This tool is specifically engineered for traders looking to capitalize on volume-based trends, reversals, and key price retest points.
The indicator seamlessly combines volume analysis, dynamic cluster calculations, and retest signal logic to present a comprehensive trading framework. It adapts to market conditions, identifying clusters of volume activity and signaling when the price retests critical zones.
█ How It Works
⚪ Volume Cluster Detection
The indicator dynamically calculates volume clusters by analyzing the highest and lowest price points within a specified lookback period.
Cluster Logic:
Bright Lines (Strong Red/Green):
These indicate that the price has frequently revisited these levels, creating a dense cluster.
Such areas serve as support or resistance, where significant historical trading has occurred, often acting as barriers to price movement.
Traders should consider these levels as potential reversal zones or consolidation points.
Faded or Darker Lines:
These lines indicate areas where the price has less historical activity, suggesting weaker clustering.
These zones have less market memory and are more likely to break, supporting trend continuation and rapid price movement.
⚪ Candle Color Logic (Market Memory)
Blue Candles (High Cluster Density):
Candles turn blue when the price has revisited a particular area many times.
This signals a highly clustered zone, likely to act as a barrier, creating consolidation or range phases.
These areas indicate strong market memory, potentially rejecting price attempts to break through.
Green or Red Candles (Low Cluster Density):
Once the price breaks out of these dense clusters, the candles turn green (bullish) or red (bearish).
This suggests the price has moved into a less clustered territory, where the path forward is clearer and trends are likely to extend without immediate resistance.
⚪ Retest Signal Logic
The indicator identifies critical retest points where the price crosses a cluster boundary and then reverses. These points are essential for traders looking to catch continuation or reversal setups.
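A simplified sketch of one way such a retest condition can be expressed is shown below; the `clusterTop` level here is a stand-in for the script's own cluster boundary, and the inputs mirror the "Minimum/Maximum Bars for Retest" settings:
//@version=6
indicator("Retest sketch", overlay = true)

int minBars = input.int(3, "Minimum bars for retest")
int maxBars = input.int(20, "Maximum bars for retest")

// Stand-in cluster boundary; the real script derives this level from its volume clusters.
float clusterTop = ta.highest(high, 50)[1]

// Count the bars since price last broke above the boundary.
var int barsSinceBreak = na
if ta.crossover(close, clusterTop)
    barsSinceBreak := 0
else if not na(barsSinceBreak)
    barsSinceBreak += 1

// Bullish retest: price pulls back to the boundary within the allowed window and closes back above it.
bool bullRetest = not na(barsSinceBreak) and barsSinceBreak >= minBars and barsSinceBreak <= maxBars and low <= clusterTop and close > clusterTop

plotshape(bullRetest, title = "Bullish retest", style = shape.triangleup, location = location.belowbar, color = color.teal, size = size.tiny)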
⚪ Dynamic Price Clustering
The indicator dynamically adapts the clustering logic based on price movement and volume shifts.
Uses a dynamic moving average (VPMA) to maintain adaptive cluster levels.
Integrates a Kalman Filter for smoothing, reducing noise, and improving trend clarity.
Automatically updates as new data is received, keeping the clusters relevant in real-time.
█ How to Use
⚪ Trend Following & Reversal Detection
Use Retest signals to identify potential trend continuation or reversal points.
⚪ Trading Volume Clusters and Market Memory
Identify Key Zones:
Focus on bright, saturated cluster lines (strong red or green) as they indicate high market memory, where price has spent significant time in the past.
These zones are likely to exhibit choppier market behavior. Apply range or mean-reversion strategies there.
Spot Potential Breakouts:
Faded or darker cluster lines indicate areas of low market memory, where the price has moved quickly and spent less time.
Use these areas to identify possible trend setups, as they represent lower resistance to price movement.
⚪ Interpreting Candle Colors for Market Phases
Blue Candles (High Cluster Density):
When candles turn blue, it signals that the price has revisited this area multiple times, creating a dense cluster.
These zones often trap price movement, leading to consolidations or range phases.
Use these areas as caution zones, where price can slow down or reverse.
Green or Red Candles (Low Cluster Density):
Once the price breaks out of these clustered zones, the candles turn green (bullish) or red (bearish), indicating lower market memory.
This signals a trend initiation with less immediate resistance, ideal for momentum and breakout trades.
Use these signals to identify emerging trends and ride the momentum.
█ Settings
Range Lookback Period: Sets the number of bars for calculating the range.
Zone Width (% of Range): Determines how wide the volume clusters are relative to the calculated range.
Volume Line Colors: Customize the appearance of bullish and bearish lines.
Retest Signals: Toggle the appearance of Triangle Up/Down retest markers.
Minimum Bars for Retest: Define the minimum number of bars required before a retest is valid.
Maximum Bars for Retest: Set the maximum number of bars within which a retest can occur.
Price Cluster Period: Adjusts the sensitivity of the dynamic clustering logic.
Cluster Confirmation: Controls how tightly the clusters respond to price action.
Price Cluster Start/Peak: Sets the minimum and maximum touches required to fully form a cluster.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
Multi-Layer Volume Profile [BigBeluga]A powerful multi-resolution volume analysis tool that stacks multiple profiles of historical trading activity to reveal true market structure.
This indicator breaks down total and delta volume distribution across time at four adjustable depths — enabling traders to spot major POCs, volume shelves, and zones of price acceptance or rejection with unmatched clarity.
🔵 KEY FEATURES
Multi-Layer Volume Profiles:
Up to 4 separate volume profiles are stacked on the chart:
- Profile 1: Full period
- Profile 2: Half-length
- Profile 3: Quarter-length
- Profile 4: One-eighth-length
This layering helps traders assess confluence across different time horizons.
Custom Bin Resolution:
Each profile uses a customizable number of bins to control visual precision.
More bins = higher granularity, fewer bins = smoother profile.
Precise POC Highlighting:
The price level with the maximum traded volume in each profile is highlighted with a thick blue POC line.
This key level shows the most accepted price for each period.
Total and Delta Volume Labels:
- Total Volume: Displays cumulative volume over the profile period at the top of the profile box.
- Delta Volume: The difference between bullish and bearish volume is labeled at the base, showing directional pressure.
Positive delta = buyer dominance, negative delta = seller dominance.
Range Levels:
Each profile includes horizontal reference lines marking its high and low bounds.
These edges often align with price reaction zones and become future resistance/support.
🔵 HOW IT WORKS
For each active profile, the indicator (a simplified sketch of this process follows the list below):
- Collects price range (highs/lows) across the selected `length`
- Divides this range into equal bins
- Assigns volume into bins based on candle close location
- Aggregates volume per bin to form the profile (polylines)
Separately tracks:
- Total volume (sum of all candles in range)
- Delta volume (sum of candle volumes: positive for bullish, negative for bearish closes)
Highlights the bin with maximum volume (POC) and marks it with a thick blue line.
Adds auxiliary lines for the high/low of each profile box, plus total/delta volume tags with tooltips.
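The binning step can be sketched like this (illustrative only; the actual indicator draws stacked polylines and boxes for several layers rather than a single label, and its bin and length inputs differ):
//@version=6
indicator("Volume profile binning sketch", max_bars_back = 500)

int length = input.int(200, "Profile length")
int bins   = input.int(25, "Bins")

float hi = ta.highest(high, length)
float lo = ta.lowest(low, length)
float binSize = (hi - lo) / bins

if barstate.islast
    float[] binVol   = array.new_float(bins, 0.0)
    float   totalVol = 0.0
    float   deltaVol = 0.0
    for i = 0 to length - 1
        // Assign each bar's volume to the bin containing its close.
        int idx = binSize > 0 ? math.min(bins - 1, math.max(0, math.floor((close[i] - lo) / binSize))) : 0
        array.set(binVol, idx, array.get(binVol, idx) + volume[i])
        totalVol += volume[i]
        deltaVol += close[i] >= open[i] ? volume[i] : -volume[i]
    // The bin holding the largest volume marks the POC price.
    int   pocIdx   = array.indexof(binVol, array.max(binVol))
    float pocPrice = lo + (pocIdx + 0.5) * binSize
    label.new(bar_index, pocPrice, "POC " + str.tostring(pocPrice, format.mintick) + "  Δ " + str.tostring(deltaVol, format.volume), style = label.style_label_left, color = color.new(color.blue, 70))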
🔵 USAGE
Spot Acceptance Zones:
Thick, flat areas on the profile show where price stayed longest — ideal for building positions.
Identify Rejection Zones:
Thin volume areas signal price rejection and are often used for stop placement or entries.
Delta Confirmation:
Use strong positive/negative delta readings as directional bias confirmation for breakout trades.
Confluence Detection:
Watch for overlapping POCs between layers to identify extremely strong support/resistance zones.
🔵 CONCLUSION
Multi-Layer Volume Profile equips traders with a deeply layered market structure view.
Whether you're scalping intraday levels or analyzing macro support zones, the ability to stack volume perspectives, visualize directional delta, and anchor POCs provides an edge in anticipating market moves.
Use this tool to validate entries, confirm structure, and make more informed, volume-aware trading decisions.
pymath█ OVERVIEW
This library ➕ enhances Pine Script's built-in types (`float`, `int`, `array<int>`, `array<float>`) with mathematical methods, mirroring 🪞 many functions from Python's `math` module. Import this library to overload or add to built-in capabilities, enabling calls like `myFloat.sin()` or `myIntArray.gcd()`.
█ CONCEPTS
This library wraps Pine's built-in `math.*` functions and implements others where necessary, expanding the mathematical toolkit available within Pine Script. It provides a more object-oriented approach to mathematical operations on core data types.
█ HOW TO USE
• Import the library: import kaigouthro/pymath/1
• Call methods directly on variables: myFloat.sin(), myIntArray.gcd()
• For raw integer literals, you MUST use parentheses: `(1234).factorial()`.
█ FEATURES
• **Infinity Handling:** Includes `isinf()` and `isfinite()` for robust checks. Uses `POS_INF_PROXY` to represent infinity.
• **Comprehensive Math Functions:** Implements a wide range of methods, including trigonometric, logarithmic, hyperbolic, and array operations.
• **Object-Oriented Approach:** Allows direct method calls on `int`, `float`, and arrays for cleaner code.
• **Improved Accuracy:** Some functions (e.g., `remainder()`) offer improved accuracy compared to default Pine behavior.
• **Helper Functions:** Internal helper functions optimize calculations and handle edge cases.
█ NOTES
This library improves upon Pine Script's built-in `math` functions by adding new ones and refining existing implementations. It handles edge cases such as infinity, NaN, and zero values, enhancing the reliability of your Pine scripts. For speed, it wraps and uses built-ins wherever possible, as they are the fastest.
█ EXAMPLES
//@version=6
indicator("My Indicator")
// Import the library
import kaigouthro/pymath/1
// Create some Vars
float myFloat = 3.14159
int myInt = 10
array<int> myIntArray = array.from(1, 2, 3, 4, 5)
// Now you can...
plot( myFloat.sin() ) // Use sin() method on a float, using built in wrapper
plot( (myInt).factorial() ) // Factorial of an integer (note parentheses)
plot( myIntArray.gcd() ) // GCD of an integer array
method isinf(self)
isinf: Checks if this float is positive or negative infinity using a proxy value.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) value to check.
Returns: (bool) `true` if the absolute value of `self` is greater than or equal to the infinity proxy, `false` otherwise.
method isfinite(self)
isfinite: Checks if this float is finite (not NaN and not infinity).
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The value to check.
Returns: (bool) `true` if `self` is not `na` and not infinity (as defined by `isinf()`), `false` otherwise.
method fmod(self, divisor)
fmod: Returns the C-library style floating-point remainder of `self / divisor` (result has the sign of `self`).
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) Dividend `x`.
divisor (float) : (float) Divisor `y`. Cannot be zero or `na`.
Returns: (float) The remainder `x - n*y` where n is `trunc(x/y)`, or `na` if divisor is 0, `na`, or inputs are infinite in a way that prevents calculation.
method factorial(self)
factorial: Calculates the factorial of this non-negative integer.
Namespace types: series int, simple int, input int, const int
Parameters:
self (int) : (int) The integer `n`. Must be non-negative.
Returns: (float) `n!` as a float, or `na` if `n` is negative or overflow occurs (based on `isinf`).
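For illustration, the documented behavior can be mirrored by a simple standalone loop (a hypothetical `factorialDemo` helper, not the library's internal code):
//@version=6
indicator("factorial sketch")

// Hypothetical helper mirroring the documented behavior: n! as a float, `na` for negative n.
factorialDemo(int n) =>
    float result = n < 0 ? na : 1.0
    if n > 1
        for i = 2 to n
            result *= i
    result

plot(factorialDemo(10))   // 3628800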
method isqrt(self)
isqrt: Calculates the integer square root of this non-negative integer (floor of the exact square root).
Namespace types: series int, simple int, input int, const int
Parameters:
self (int) : (int) The non-negative integer `n`.
Returns: (int) The greatest integer `a` such that a² <= n, or `na` if `n` is negative.
method comb(self, k)
comb: Calculates the number of ways to choose `k` items from `self` items without repetition and without order (Binomial Coefficient).
Namespace types: series int, simple int, input int, const int
Parameters:
self (int) : (int) Total number of items `n`. Must be non-negative.
k (int) : (int) Number of items to choose. Must be non-negative.
Returns: (float) The binomial coefficient nCk, or `na` if inputs are invalid (n<0 or k<0), `k > n`, or overflow occurs.
method perm(self, k)
perm: Calculates the number of ways to choose `k` items from `self` items without repetition and with order (Permutations).
Namespace types: series int, simple int, input int, const int
Parameters:
self (int) : (int) Total number of items `n`. Must be non-negative.
k (simple int) : (simple int = na) Number of items to choose. Must be non-negative. Defaults to `n` if `na`.
Returns: (float) The number of permutations nPk, or `na` if inputs are invalid (n<0 or k<0), `k > n`, or overflow occurs.
method log2(self)
log2: Returns the base-2 logarithm of this float. Input must be positive. Wraps `math.log(self) / math.log(2.0)`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be positive.
Returns: (float) The base-2 logarithm, or `na` if input <= 0.
method trunc(self)
trunc: Returns this float with the fractional part removed (truncates towards zero).
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (int) The integer part, or `na` if input is `na` or infinite.
method abs(self)
abs: Returns the absolute value of this float. Wraps `math.abs()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) The absolute value, or `na` if input is `na`.
method acos(self)
acos: Returns the arccosine of this float, in radians. Wraps `math.acos()`. Input must be between -1 and 1.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be between -1 and 1.
Returns: (float) Angle in radians in the range [0, π], or `na` if input is outside [-1, 1] or `na`.
method asin(self)
asin: Returns the arcsine of this float, in radians. Wraps `math.asin()`. Input must be between -1 and 1.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be between -1 and 1.
Returns: (float) Angle in radians in the range [-π/2, π/2], or `na` if input is outside [-1, 1] or `na`.
method atan(self)
atan: Returns the arctangent of this float, in radians. Wraps `math.atan()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) Angle in radians in the range (-π/2, π/2), or `na` if input is `na`.
method ceil(self)
ceil: Returns the ceiling of this float (smallest integer >= self). Wraps `math.ceil()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (int) The ceiling value, or `na` if input is `na` or infinite.
method cos(self)
cos: Returns the cosine of this float (angle in radians). Wraps `math.cos()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The angle in radians.
Returns: (float) The cosine, or `na` if input is `na`.
method degrees(self)
degrees: Converts this float from radians to degrees. Wraps `math.todegrees()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The angle in radians.
Returns: (float) The angle in degrees, or `na` if input is `na`.
method exp(self)
exp: Returns e raised to the power of this float. Wraps `math.exp()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The exponent.
Returns: (float) `e**self`, or `na` if input is `na`.
method floor(self)
floor: Returns the floor of this float (largest integer <= self). Wraps `math.floor()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (int) The floor value, or `na` if input is `na` or infinite.
method log(self)
log: Returns the natural logarithm (base e) of this float. Wraps `math.log()`. Input must be positive.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be positive.
Returns: (float) The natural logarithm, or `na` if input <= 0 or `na`.
method log10(self)
log10: Returns the base-10 logarithm of this float. Wraps `math.log10()`. Input must be positive.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be positive.
Returns: (float) The base-10 logarithm, or `na` if input <= 0 or `na`.
method pow(self, exponent)
pow: Returns this float raised to the power of `exponent`. Wraps `math.pow()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The base.
exponent (float) : (float) The exponent.
Returns: (float) `self**exponent`, or `na` if inputs are `na` or lead to undefined results.
method radians(self)
radians: Converts this float from degrees to radians. Wraps `math.toradians()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The angle in degrees.
Returns: (float) The angle in radians, or `na` if input is `na`.
method round(self)
round: Returns the nearest integer to this float. Wraps `math.round()`. Ties are rounded away from zero.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (int) The rounded integer, or `na` if input is `na` or infinite.
method sign(self)
sign: Returns the sign of this float (-1, 0, or 1). Wraps `math.sign()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (int) -1 if negative, 0 if zero, 1 if positive, `na` if input is `na`.
method sin(self)
sin: Returns the sine of this float (angle in radians). Wraps `math.sin()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The angle in radians.
Returns: (float) The sine, or `na` if input is `na`.
method sqrt(self)
sqrt: Returns the square root of this float. Wraps `math.sqrt()`. Input must be non-negative.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be non-negative.
Returns: (float) The square root, or `na` if input < 0 or `na`.
method tan(self)
tan: Returns the tangent of this float (angle in radians). Wraps `math.tan()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The angle in radians.
Returns: (float) The tangent, or `na` if input is `na`.
method acosh(self)
acosh: Returns the inverse hyperbolic cosine of this float. Input must be >= 1.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be >= 1.
Returns: (float) The inverse hyperbolic cosine, or `na` if input < 1 or `na`.
method asinh(self)
asinh: Returns the inverse hyperbolic sine of this float.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) The inverse hyperbolic sine, or `na` if input is `na`.
method atanh(self)
atanh: Returns the inverse hyperbolic tangent of this float. Input must be between -1 and 1 (exclusive).
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number. Must be between -1 and 1 (exclusive).
Returns: (float) The inverse hyperbolic tangent, or `na` if input is outside (-1, 1) or `na`.
method cosh(self)
cosh: Returns the hyperbolic cosine of this float.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) The hyperbolic cosine, or `na` if input is `na`.
method sinh(self)
sinh: Returns the hyperbolic sine of this float.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) The hyperbolic sine, or `na` if input is `na`.
method tanh(self)
tanh: Returns the hyperbolic tangent of this float.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The input number.
Returns: (float) The hyperbolic tangent, or `na` if input is `na`.
method atan2(self, dx)
atan2: Returns the angle in radians between the positive x-axis and the point (dx, self). Wraps `math.atan2()`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The y-coordinate `y`.
dx (float) : (float) The x-coordinate `x`.
Returns: (float) The angle in radians, result of `math.atan2(self, dx)`. Returns `na` if inputs are `na`. Note: `math.atan2(0, 0)` returns 0 in Pine.
Optimization: Use built-in math.atan2()
method cbrt(self)
cbrt: Returns the cube root of this float.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The value to find the cube root of.
Returns: (float) The real cube root. Handles negative inputs correctly, or `na` if input is `na`.
method exp2(self)
exp2: Returns 2 raised to the power of this float. Calculated as `2.0.pow(self)`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The exponent.
Returns: (float) `2**self`, or `na` if input is `na` or results in non-finite value.
method expm1(self)
expm1: Returns `e**self - 1`. Calculated as `self.exp() - 1.0`. May offer better precision for small `self` in some environments, but Pine provides no guarantee over `self.exp() - 1.0`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The exponent.
Returns: (float) `e**self - 1`, or `na` if input is `na` or `self.exp()` is `na`.
method log1p(self)
log1p: Returns the natural logarithm of (1 + self). Calculated as `(1.0 + self).log()`. Pine provides no specific precision guarantee for self near zero.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) Value to add to 1. `1 + self` must be positive.
Returns: (float) Natural log of `1 + self`, or `na` if input is `na` or `1 + self <= 0`.
method modf(self)
modf: Returns the fractional and integer parts of this float as a tuple `[fractionalPart, integerPart]`. Both parts have the sign of `self`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The number `x` to split.
Returns: ([float, float]) A tuple containing `[fractionalPart, integerPart]`, or `[na, na]` if `x` is `na` or non-finite.
method remainder(self, divisor)
remainder: Returns the IEEE 754 style remainder of `self` with respect to `divisor`. Result `r` satisfies `abs(r) <= 0.5 * abs(divisor)`. Uses round-half-to-even.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) Dividend `x`.
divisor (float) : (float) Divisor `y`. Cannot be zero or `na`.
Returns: (float) The IEEE 754 remainder, or `na` if divisor is 0, `na`, or inputs are non-finite in a way that prevents calculation.
method copysign(self, signSource)
copysign: Returns a float with the magnitude (absolute value) of `self` but the sign of `signSource`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) Value providing the magnitude `x`.
signSource (float) : (float) Value providing the sign `y`.
Returns: (float) `abs(x)` with the sign of `y`, or `na` if either input is `na`.
method frexp(self)
frexp: Returns the mantissa (m) and exponent (e) of this float `x` as `[m, e]`, such that `x = m * 2^e` and `0.5 <= abs(m) < 1` (unless `x` is 0).
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) The number `x` to decompose.
Returns: ([float, int]) A tuple `[m, e]`, or `[0.0, 0]` if `x` is 0, or `[na, na]` if `x` is non-finite or `na`.
method isclose(self, other, rel_tol, abs_tol)
isclose: Checks if this float `a` and `other` float `b` are close within relative and absolute tolerances.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) First value `a`.
other (float) : (float) Second value `b`.
rel_tol (simple float) : (simple float = 1e-9) Relative tolerance. Must be non-negative and less than 1.0.
abs_tol (simple float) : (simple float = 0.0) Absolute tolerance. Must be non-negative.
Returns: (bool) `true` if `abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)`. Handles `na`/`inf` appropriately. Returns `na` if tolerances are invalid.
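For illustration, the documented comparison rule can be reproduced in standalone Pine (a hypothetical `iscloseDemo` helper, not the library's internal implementation):
//@version=6
indicator("isclose sketch")

// Hypothetical re-implementation of the documented rule; not the library's internal code.
iscloseDemo(float a, float b, float relTol, float absTol) =>
    math.abs(a - b) <= math.max(relTol * math.max(math.abs(a), math.abs(b)), absTol)

// 0.1 + 0.2 differs from 0.3 only by floating-point error, so this plots 1.
plot(iscloseDemo(0.1 + 0.2, 0.3, 1e-9, 0.0) ? 1 : 0)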
method ldexp(self, exponent)
ldexp: Returns `self * (2**exponent)`. Inverse of `frexp`.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : (float) Mantissa part `x`.
exponent (int) : (int) Exponent part `i`.
Returns: (float) The result of `x * pow(2, i)`, or `na` if inputs are `na` or result is non-finite.
method gcd(self)
gcd: Calculates the Greatest Common Divisor (GCD) of all integers in this array.
Namespace types: array<int>
Parameters:
self (array<int>) : (array<int>) An array of integers.
Returns: (int) The largest positive integer that divides all non-zero elements, 0 if all elements are 0 or array is empty. Returns `na` if any element is `na`.
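For illustration, an array GCD can be reduced pairwise with the Euclidean algorithm, as in this hypothetical sketch (the library's internals may differ):
//@version=6
indicator("gcd sketch")

// Euclidean GCD of two integers.
gcd2(int a, int b) =>
    int x = a < 0 ? -a : a
    int y = b < 0 ? -b : b
    while y != 0
        int t = y
        y := x % y
        x := t
    x

// Pairwise reduction across the whole array; 0 for an empty array.
gcdArrayDemo(array<int> arr) =>
    int g = 0
    if arr.size() > 0
        for i = 0 to arr.size() - 1
            g := gcd2(g, arr.get(i))
    g

plot(gcdArrayDemo(array.from(12, 18, 30)))   // 6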
method lcm(self)
lcm: Calculates the Least Common Multiple (LCM) of all integers in this array.
Namespace types: array<int>
Parameters:
self (array<int>) : (array<int>) An array of integers.
Returns: (int) The smallest positive integer that is a multiple of all non-zero elements, 0 if any element is 0, 1 if array is empty. Returns `na` on potential overflow or if any element is `na`.
method dist(self, other)
dist: Returns the Euclidean distance between this point `p` and another point `q` (given as arrays of coordinates).
Namespace types: array<float>
Parameters:
self (array<float>) : (array<float>) Coordinates of the first point `p`.
other (array<float>) : (array<float>) Coordinates of the second point `q`. Must have the same size as `p`.
Returns: (float) The Euclidean distance, or `na` if arrays have different sizes, are empty, or contain `na`/non-finite values.
method fsum(self)
fsum: Returns an accurate floating-point sum of values in this array. Uses built-in `array.sum()`. Note: Pine Script does not guarantee the same level of precision tracking as Python's `math.fsum`.
Namespace types: array<float>
Parameters:
self (array<float>) : (array<float>) The array of floats to sum.
Returns: (float) The sum of the array elements. Returns 0.0 for an empty array. Returns `na` if any element is `na`.
method hypot(self)
hypot: Returns the Euclidean norm (distance from origin) for this point given by coordinates in the array. `sqrt(sum(x*x for x in coordinates))`.
Namespace types: array<float>
Parameters:
self (array<float>) : (array<float>) Array of coordinates defining the point.
Returns: (float) The Euclidean norm, or 0.0 if the array is empty. Returns `na` if any element is `na` or non-finite.
method prod(self, start)
prod: Calculates the product of all elements in this array.
Namespace types: array<float>
Parameters:
self (array<float>) : (array<float>) The array of values to multiply.
start (simple float) : (simple float = 1.0) The starting value for the product (returned if the array is empty).
Returns: (float) The product of array elements * start. Returns `na` if any element is `na`.
method sumprod(self, other)
sumprod: Returns the sum of products of values from this array `p` and another array `q` (dot product).
Namespace types: array<float>
Parameters:
self (array<float>) : (array<float>) First array of values `p`.
other (array<float>) : (array<float>) Second array of values `q`. Must have the same size as `p`.
Returns: (float) The sum of `p[i] * q[i]` for all i, or `na` if arrays have different sizes or contain `na`/non-finite values. Returns 0.0 for empty arrays.
Best SMA FinderThis script, Best SMA Finder, is a tool designed to identify the most robust simple moving average (SMA) length for a given chart, based on historical backtest performance. It evaluates hundreds of SMA values (from 10 to 1000) and selects the one that provides the best balance between profitability, consistency, and trade frequency.
What it does:
The script performs individual backtests for each SMA length using either "Long Only" or "Buy & Sell" logic, as selected by the user. For each tested SMA, it computes:
- Total number of trades
- Profit Factor (total profits / total losses)
- Win Rate
- A composite Robustness Score, which integrates Profit Factor, number of trades (log-scaled), and win rate.
Only SMA configurations that meet the user-defined minimum trade count are considered valid. Among all valid candidates, the script selects the SMA length with the highest robustness score and plots it on the chart.
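Purely for illustration, a composite of this general shape could be assembled as in the sketch below; the weighting is an assumption for demonstration, not the script's actual robustness formula:
//@version=6
indicator("Robustness score sketch")

// Hypothetical composite: profit factor and win rate scale the score, trade count enters log-scaled.
robustness(float profitFactor, int trades, float winRatePct) =>
    profitFactor * math.log(trades + 1) * (winRatePct / 100)

plot(robustness(1.8, 55, 52.0))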
How to use it:
- Choose the strategy type: "Long Only" or "Buy & Sell"
- Set the minimum trade count to filter out statistically irrelevant results
- Enable or disable the summary stats table (default: enabled)
The selected optimal SMA is plotted on the chart in blue. The optional table in the top-right corner shows the corresponding SMA length, trade count, Profit Factor, Win Rate, and Robustness Score for transparency.
Key Features:
- Exhaustive SMA optimization across 991 values
- Customizable trade direction and minimum trade filters
- In-chart visualization of results via table and plotted optimal SMA
- Uses a custom robustness formula to rank SMA lengths
Use cases:
Ideal for traders who want to backtest and auto-select a historically effective SMA without manual trial-and-error. Useful for swing and trend-following strategies across different timeframes.
📌 Limitations:
- Not a full trading strategy with position sizing or stop-loss logic
- Only one entry per direction at a time is allowed
- Designed for exploration and optimization, not as a ready-to-trade system
This script is open-source and built entirely from original code and logic. It does not replicate any closed-source script or reuse significant external open-source components.
Bitcoin Monthly Seasonality [Alpha Extract]The Bitcoin Monthly Seasonality indicator analyzes historical Bitcoin price performance across different months of the year, enabling traders to identify seasonal patterns and potential trading opportunities. This tool helps traders:
Visualize which months historically perform best and worst for Bitcoin.
Track average returns and win rates for each month of the year.
Identify seasonal patterns to enhance trading strategies.
Compare cumulative or individual monthly performance.
🔶 CALCULATION
The indicator processes historical Bitcoin price data to calculate monthly performance metrics
Monthly Return Calculation
Inputs:
Monthly open and close prices.
User-defined lookback period (1-15 years).
Return Types:
Percentage: (monthEndPrice / monthStartPrice - 1) × 100
Price: monthEndPrice - monthStartPrice
Statistical Measures
Monthly Averages: ◦ Average return for each month calculated from historical data.
Win Rate: ◦ Percentage of positive returns for each month.
Best/Worst Detection: ◦ Identifies months with highest and lowest average returns.
Cumulative Option
Standard View: Shows discrete monthly performance.
Cumulative View: Shows compounding effect of consecutive months.
Example Calculation (Pine Script):
monthReturn = returnType == "Percentage" ?
     (monthEndPrice / monthStartPrice - 1) * 100 :
     monthEndPrice - monthStartPrice

calcWinRate(arr) =>
    winCount = 0.0   // float, so the ratio below is not truncated by integer division
    totalCount = array.size(arr)
    if totalCount > 0
        for i = 0 to totalCount - 1
            if array.get(arr, i) > 0
                winCount += 1
        (winCount / totalCount) * 100
    else
        0.0
🔶 DETAILS
Visual Features
Monthly Performance Bars: ◦ Color-coded bars (teal for positive, red for negative returns). ◦ Special highlighting for best (yellow) and worst (fuchsia) months.
Optional Trend Line: ◦ Shows continuous performance across months.
Monthly Axis Labels: ◦ Clear month names for easy reference.
Statistics Table: ◦ Comprehensive view of monthly performance metrics. ◦ Color-coded rows based on performance.
Interpretation
Strong Positive Months: Historically bullish periods for Bitcoin.
Strong Negative Months: Historically bearish periods for Bitcoin.
Win Rate Analysis: Higher win rates indicate more consistently positive months.
Pattern Recognition: Identify recurring seasonal patterns across years.
Best/Worst Identification: Quickly spot the historically strongest and weakest months.
🔶 EXAMPLES
The indicator helps identify key seasonal patterns
Bullish Seasons: Visualize historically strong months where Bitcoin tends to perform well, allowing traders to align long positions with favorable seasonality.
Bearish Seasons: Identify historically weak months where Bitcoin tends to underperform, helping traders avoid unfavorable periods or consider short positions.
Seasonal Strategy Development: Create trading strategies that capitalize on recurring monthly patterns, such as entering positions in historically strong months and reducing exposure during weak months.
Year-to-Year Comparison: Assess how current year performance compares to historical seasonal patterns to identify anomalies or confirmation of trends.
🔶 SETTINGS
Customization Options
Lookback Period: Adjust the number of years (1-15) used for historical analysis.
Return Type: Choose between percentage returns or absolute price changes.
Cumulative Option: Toggle between discrete monthly performance or cumulative effect.
Visual Style Options: Bar Display: Enable/disable and customize colors for positive/negative bars, Line Display: Enable/disable and customize colors for trend line, Axes Display: Show/hide reference axes.
Visual Enhancement: Best/Worst Month Highlighting: Toggle special highlighting of extreme months, Custom highlight colors for best and worst performing months.
The Bitcoin Monthly Seasonality indicator provides traders with valuable insights into Bitcoin's historical performance patterns throughout the year, helping to identify potentially favorable and unfavorable trading periods based on seasonal tendencies.
Market Manipulation Index (MMI)The Composite Manipulation Index (CMI) is a structural integrity tool that quantifies how chaotic or orderly current market conditions are, with the aim of detecting potentially manipulated or unstable environments. It blends two distinct mathematical models that assess price behavior in terms of both structural rhythm and predictability.
1. Sine-Fit Deviation Model:
This component assumes that ideal, low-manipulation price behavior resembles a smooth oscillation, such as a sine wave. It generates a synthetic sine wave using a user-defined period and compares it to actual price movement over an adaptive window. The error between the real price and this synthetic wave—normalized by price variance—forms the Sine-Based Manipulation Index. A high error indicates deviation from natural rhythm, suggesting structural disorder.
2. Predictability-Based Model:
The second component estimates how well current price can be predicted using recent price lags. A two-variable rolling linear regression is computed between the current price and two lagged inputs (close[1] and close[2]). If the predicted price diverges from the actual price, this error—also normalized by price variance—reflects unpredictability. High prediction error implies a more manipulated or erratic environment.
3. Adaptive Mechanism:
Both components are calculated using an adaptive smoothing window based on the Average True Range (ATR). This allows the indicator to respond proportionally to market volatility. During high volatility, the analysis window expands to avoid over-sensitivity; during calm periods, it contracts for better responsiveness.
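As a rough Pine sketch of that idea (the bounds and scaling below are assumptions, not the script's actual parameters):
//@version=6
indicator("Adaptive window sketch")

int baseLen = input.int(30, "Base window")

// Ratio of current ATR to its smoothed longer-term average gauges the volatility state.
float atrNow   = ta.atr(14)
float atrAvg   = ta.sma(atrNow, 100)
float volRatio = atrAvg > 0 ? atrNow / atrAvg : 1.0

// Expand the analysis window in high volatility, contract it in calm conditions (bounds are arbitrary here).
float adaptiveLen = math.max(10, math.min(100, baseLen * volRatio))

plot(adaptiveLen, "Adaptive window length (bars, before rounding)")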
4. Composite Output:
The two normalized metrics are averaged to form the final CMI value, which is then optionally smoothed further. The output is scaled between 0 and 1:
0 indicates a highly structured, orderly market.
1 indicates complete structural breakdown or randomness.
Suggested Interpretation:
CMI < 0.3: Market is clean and structured. Trend-following or breakout strategies may perform better.
CMI > 0.7: Market is structurally unstable. Choppy price action, fakeouts, or manipulative behavior may dominate.
CMI 0.3–0.7: Transitional zone. Caution or reduced risk may be warranted.
This indicator is designed to serve as a contextual filter, helping traders assess whether current market conditions are conducive to structured strategies, or if discretion and defense are more appropriate.
Dual-Phase Trend Regime Oscillator (Zeiierman)█ Overview
Trend Regime: Dual-Phase Oscillator (Zeiierman) is a volatility-sensitive trend classification tool that dynamically switches between two oscillators, one optimized for low volatility, the other for high volatility.
By analyzing standard deviation-based volatility states and applying correlation-derived oscillators, this indicator reveals not only whether the market is trending but also what kind of trend regime it is in —Bullish or Bearish —and how that regime reacts to market volatility.
█ Its Uniqueness
Most trend indicators assume a static market environment; they don't adjust their logic when the underlying volatility shifts. That often leads to false signals in choppy conditions or late entries in trending phases.
Trend Regime: Dual-Phase Oscillator solves this by introducing volatility-aware adaptability. It switches between a slow, stable oscillator in calm markets and a fast, reactive oscillator in volatile ones, ensuring the right sensitivity at the right time.
█ How It Works
⚪ Volatility State Engine
Calculates returns-based volatility using standard deviation of price change
Smooths the current volatility with a moving average
Builds a volatility history window and performs median clustering to determine typical "Low" and "High" volatility zones
Dynamically assigns the chart to one of two internal volatility regimes: Low or High
⚪ Dual Oscillators
In Low Volatility, it uses a Slow Trend Oscillator (longer lookback, smoother)
In High Volatility, it switches to a Fast Trend Oscillator (shorter lookback, responsive)
Both oscillators use price-time correlation as a measure of directional strength (a minimal sketch follows this list)
The output is normalized between 0 and 1, allowing for consistent interpretation
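A minimal sketch of such a correlation-based trend oscillator is shown below; the volatility switch here is a simplified stand-in, since the real script clusters standard deviations rather than comparing them to a moving average:
//@version=6
indicator("Price-time correlation sketch")

int slowLen = input.int(50, "Slow length (low volatility)")
int fastLen = input.int(14, "Fast length (high volatility)")

// Correlation between price and bar index measures how consistently price advances with time:
// +1 = steadily rising, -1 = steadily falling. Rescaled to the 0..1 range used by the indicator.
float slowOsc = (ta.correlation(close, bar_index, slowLen) + 1) / 2
float fastOsc = (ta.correlation(close, bar_index, fastLen) + 1) / 2

// Stand-in volatility switch for illustration only.
float vol     = ta.stdev(ta.change(close), 20)
bool  highVol = vol > ta.sma(vol, 100)
float active  = highVol ? fastOsc : slowOsc

plot(active, "Active trend oscillator")
hline(0.5, "Neutral")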
⚪ Trend Regime Classification
The active oscillator is compared to a neutral threshold (0.5)
If above: Bullish Regime, if below: Bearish Regime, else: Neutral
The background and markers update to reflect regime changes visually
Triangle markers highlight bullish/bearish regime shifts
█ How to Use
⚪ Identify Current Trend Regime
Use the background color and chart table to immediately recognize whether the market is trending up or down.
⚪ Trade Regime Shifts
Use triangle markers (▲ / ▼) to spot fresh regime entries, which are ideal for confirming breakouts within trends.
⚪ Pullback Trading
Look for pullbacks when the trend is in a stable condition and the slow oscillator remains consistently near the upper or lower threshold. Watch for moments when the fast oscillator retraces back toward the midline, or slightly above/below it — this often signals a potential pullback entry in the direction of the prevailing trend.
█ Settings Explained
Length (Slow Trend Oscillator) – Used in calm conditions. Longer = smoother signals
Length (Fast Trend Oscillator) – Used in volatile conditions. Shorter = more responsive
Volatility Refit Interval – Controls how often the system recalculates Low/High volatility levels
Current Volatility Period – Lookback used for immediate volatility measurement
Volatility Smoothing Length – Applies an SMA to the raw volatility to reduce noise
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
Nasan Risk Score & Postion Size Estimator** THE RISK SCORE AND POSITION SIZE WILL ONLY BE CALCULATED ON THE DAILY TIMEFRAME, NOT ON OTHER TIMEFRAMES.
The typically accepted generic rule for risk management is not to risk more than 1% - 2% of the capital on any given trade. It has its own basis; however, it does not take into account the stock's historic and current performance, nor the trader's performance metrics (like win rate and profit ratio).
The Nasan Risk Score & Position Size calculator takes all the listed parameters into account and estimates a Risk %. The position size is calculated using the estimated Risk %, the current ATR, and a dynamically adjusted ATR multiple (the ATR multiple is adjusted based on the true range's volatility and the stock's relative performance).
It follows a series of calculations:
Unadjusted Nasan Risk Score = (Min Risk)^a + b*
Min Risk = ( 5 year weighted avg Annual Stock Return - 5 year weighted avg Annual Bench Return) / 5 year weighted avg Annual Max ATR%
Max Risk = ( 5 year weighted avg Annual Stock Return - 5 year weighted avg Annual Bench Return) / 5 year weighted avg Annual Min ATR%
The min and max return is calculated based on stocks excess return in comparison to the Benchmark return and adjusted for volatility of the stock.
When a stock underperforms the benchmark, the default is not to calculate a position size; however, if we opt to calculate it, 1% is used for the Min Risk % and 2% for the Max Risk %, while all the other calculations and scaling remain the same.
Rationale:
Stocks outperforming their benchmark with lower volatility (ATR%) score higher.
A stock with high returns but excessive volatility gets penalized.
This ensures volatility-adjusted performance is emphasized rather than absolute returns.
Depending on the risk preference aggressive or conservative
Aggressive Risk Scaling: a = max (m, n) and b = min (m, n)
Conservative Scaling: a = min (m, n) and b = max (m, n)
where n = traders win % /100 and m = 1 - (1/ (1+ profit ratio))
A default of 50% is used for win factor and 1.5 for profit ratio.
Aggressive risk scaling increases exposure when the strategy's strongest factor is favorable.
Conservative risk scaling ensures more stable risk levels by focusing on the weaker factor.
The Unadjusted Nasan Risk Score is further refined by a tolerance factor based on the stock's maximum annual drawdown and the trader's maximum drawdown tolerance.
Tolerance = /100
The correction factor (Tolerance) adjusts the risk score based on downside risk. Here's how it works conceptually:
The formula calculates how much the stock's actual drawdown exceeds your acceptable limit.
If the stock's maximum annual drawdown is smaller than the trader's maximum acceptable drawdown %, this results in a positive correction factor (indicating the drawdown is within your acceptable range), which increases the unadjusted score.
If the stock's maximum annual drawdown exceeds the trader's maximum acceptable drawdown %, the correction factor decreases (indicating that the downside risk is greater than what you are comfortable with), so it reduces the risk exposure accordingly.
Once the Risk Score (numerically equal to the Risk %) is determined, the position size is calculated based on current market conditions.
Nasan Risk Score (Risk%) = Unadjusted Nasan Risk Score * Tolerance.
Position Size = (Capital * Risk%) / (ATR-Multiplier * ATR)
The ATR Multiplier is dynamically adjusted based on the stocks recent relative performance and the variability of the true range itself. It would range between 1 - 3.5.
The multiplier widens when conditions are not favorable decreasing the position size and increases position size when conditions are favorable.
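For example (illustrative numbers only): with $50,000 of capital, an estimated Risk % of 1.2, a current ATR of $2.00, and an ATR multiplier of 2.5, the stop distance is 2.5 × $2.00 = $5.00, the dollar risk is $50,000 × 1.2% = $600, and the resulting position size is $600 / $5.00 = 120 shares.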
This calculation/estimate does not give you a very different result from the arbitrary 1% - 2%. However, it does fine-tune the % based on stock performance, trader performance, and tolerance level.
NIG Probability TableNormal-Inverse Gaussian Probability Table
This indicator implements the Normal-Inverse Gaussian (NIG) distribution to estimate the likelihood of future price based on recent market behavior.
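For reference, the probability model behind the table is the standard NIG density f(x) = α·δ·K₁(α·√(δ² + (x − μ)²)) / (π·√(δ² + (x − μ)²)) · exp(δ·√(α² − β²) + β·(x − μ)), where K₁ is the modified Bessel function of the second kind that the script approximates numerically.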
📊 Key Features:
- Estimates the parameters (α: tail heaviness, β: skewness, δ: scale, μ: location)
of the NIG distribution using a sliding window over log returns.
- Uses a numerically approximated version of the modified Bessel function (K₁)
to calculate the NIG probability density function (PDF).
- Normalizes the total probability across all bins to ensure the values are interpretable.
- Displays a dynamic probability table showing the chance of future returns falling into each bin.
⚠️ Notes:
- This is a real-time approximation. The Bessel function and posterior inference are simplified.
- Tail probabilities and shape parameters are sensitive to the window size and input settings.
- Useful for risk analysis, option overlays, and strategy filters.
Log-Normal Price ForecastLog-Normal Price Forecast
This Pine Script creates a log-normal forecast model of future price movements on a TradingView chart, based on historical log returns. It plots expected price trajectories and bands representing different levels of statistical deviation.
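Under this model, the projected path and bands take the familiar log-normal form Price(t) = Close · exp(μ·t ± z·σ·√t), where μ and σ are the mean and standard deviation of the log returns over the model length, t is the number of bars ahead, and z is the standard-normal quantile for the chosen confidence interval (for example, z ≈ 1.96 for 95%); the script's exact drift and volatility handling, such as the volatility SMA smoothing, may differ in detail.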
Parameters
Model Length – Number of bars used to calculate average and standard deviation of log returns (default: 100).
Forecast Length – Number of bars into the future for which the forecast is projected (default: 100, max: 500).
Volatility SMA Length – The smoothing length for the standard deviation (default: 20).
Confidence Intervals – Confidence intervals for price bands (default: 95%, 99%, 99.9%).
Market Sessions & Viewer Panel [By MUQWISHI]▋ INTRODUCTION :
The “Market Sessions & Viewer Panel” is a clean and intuitive visual indicator tool that highlights up to four trading sessions directly on the chart. Each session is fully customizable with its name, session time, and color. It also generates a panel that provides a quick-glance summary of each session’s candle/bar shape, helping traders gain insight into the volatility across all trading sessions.
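As a minimal illustration of the underlying idea, a single session can be detected and highlighted like this; the session string, timezone, and color below are placeholder inputs, not the script's defaults:
//@version=6
indicator("Session highlight sketch", overlay = true)

// Placeholder session, timezone, and color inputs.
string sessTime = input.session("0930-1600", "Session time")
string sessTz   = input.string("America/New_York", "Timezone")
color  sessCol  = input.color(color.new(color.blue, 85), "Session color")

// time() returns na outside the session window, so this flags bars inside the active session.
bool inSession = not na(time(timeframe.period, sessTime, sessTz))

bgcolor(inSession ? sessCol : na, title = "Session background")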
_______________________
▋ OVERVIEW:
_______________________
▋ CREDIT:
This indicator utilizes the “ Timezone — Library ”. A huge thanks to @n00btraders for effort and well-organized work.
_______________________
▋ SESSION PANEL:
The Session Panel allows traders to visually compare session volatility using a candlestick/bar pattern.
Each bar represents the price action during a session and includes the session status, session name, closing price, change(%) from open, and a tooltip that reveals detailed OHLC and volume when hovered over.
Chart Type:
It offers two display styles, Bar or Candle, based on the trader's preference.
Sorting:
Allows you to arrange session candles/bars based on…
—Left to Right: The most recently opened on the left, moving backward in time to the right.
—Right to Left: The most recently opened on the right, moving backward in time to the left.
—Default: Arrange sessions in the user-defined input order.
_______________________
▋ CHART VISUALIZATION:
The chart visualization highlights each trading session using color-coded backgrounds in two selectable drawing styles that span their respective active timeframes. Each session block provides session’s name, close price, and change from open.
Chart Type: Candle
Chart Type: Box
Extra Drawing Feature:
This feature may not exist in other indicators within the same category: it extends the session block drawing to the projected end of the session. This is done through estimation based on historical data; however, it doesn't function fully on seconds-based timeframes due to drawing limitations.
_______________________
▋ INDICATOR SETTINGS:
Section(1): Sessions
(1) Universal Timezone.
(2) Each Session: Enable/Disable, Name, Color, and Time.
Section(2): Session Panel
(1) Show/Hide Session Panel.
(2) Chart Type: Candle/Bar.
(3) Bar’s Up/Down color.
(4) Width and Height of the bar.
(5) Location of Session Panel on chat.
(6) Sort: Left to Right (most recent session is placed on the left), Right to Left (most recent session is placed on the right), and Default (as input arrangement).
Section(3): Chart Visualization
(1) Show/Hide Chart Block Visualization.
(2) Draw Shape: Box/Candle.
(3) Border Style and Size.
(4) Label Styling includes location, size, and some essential selectable infos.
Please let me know if you have any questions
Elastic Volume-Weighted Student-T TensionOverview
The Elastic Volume-Weighted Student-T Tension Bands indicator dynamically adapts to market conditions using an advanced statistical model based on the Student-T distribution. Unlike traditional Bollinger Bands or Keltner Channels, this indicator leverages elastic volume-weighted averaging to compute real-time dispersion and location parameters, making it highly responsive to volatility changes while maintaining robustness against price fluctuations.
This methodology is inspired by incremental calculation techniques for weighted mean and variance, as outlined in the paper by Tony Finch:
📄 "Incremental Calculation of Weighted Mean and Variance" .
Key Features
✅ Adaptive Volatility Estimation – Uses an exponentially weighted Student-T model to dynamically adjust band width.
✅ Volume-Weighted Mean & Dispersion – Incorporates real-time volume weighting, ensuring a more accurate representation of market sentiment.
✅ High-Timeframe Volume Normalization – Provides an option to smooth volume impact by referencing a higher timeframe’s cumulative volume, reducing noise from high-variability bars.
✅ Customizable Tension Parameters – Configurable standard deviation multipliers (σ) allow for fine-tuned volatility sensitivity.
✅ %B-Like Oscillator for Relative Price Positioning – The main indicator takes the form of a dedicated oscillator pane that normalizes the price position within the sigma ranges, helping identify overbought/oversold conditions and potential momentum shifts.
✅ Robust Statistical Foundation – Utilizes kurtosis-based degree-of-freedom estimation, enhancing responsiveness across different market conditions.
How It Works
Volume-Weighted Elastic Mean (eμ) – Computes a dynamic mean price using an elastic volume-weighted moving average approach influenced by trade volume; if no volume is detected in the series, the study uses the true range as a replacement.
Dispersion (eσ) via Student-T Distribution – Instead of assuming a fixed normal distribution, the bands adapt to heavy-tailed distributions using kurtosis-driven degrees of freedom.
Incremental Calculation of Variance – The indicator applies Tony Finch's incremental method for computing weighted variance instead of arithmetic sums over a fixed bar window or arrays, improving efficiency and numerical stability (a minimal sketch of this update follows the list below).
Tension Calculation – Two custom dispersion "zones" are computed from the weighted mean and the dynamically adjusted Student-T standard deviation.
%B-Like Oscillator Calculation – The oscillator normalizes the price within the band structure, with values between 0 and 1:
* 0.00 → Price is at the lower band (-2σ).
* 0.50 → Price is at the volume-weighted mean (eμ).
* 1.00 → Price is at the upper band (+2σ).
* Readings above 1.00 or below 0.00 suggest extreme movements or possible breakouts.
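A minimal sketch of the elastic mean/variance update using a fixed decay factor is shown below; it illustrates the recurrence only, not the script's exact volume-weighted code:
//@version=6
indicator("Elastic mean/variance sketch", overlay = true)

float alpha = input.float(0.05, "Decay factor", minval = 0.001, maxval = 1.0)
float mult  = input.float(2.0, "Sigma multiplier")

// In the script the weight comes from volume (or true range); a fixed decay keeps the sketch short.
var float eMu  = close
var float eVar = 0.0
float diff = close - eMu
float incr = alpha * diff
eMu  := eMu + incr
eVar := (1 - alpha) * (eVar + diff * incr)
float eSigma = math.sqrt(eVar)

plot(eMu, "Elastic mean", color = color.orange)
plot(eMu + mult * eSigma, "Upper band", color = color.teal)
plot(eMu - mult * eSigma, "Lower band", color = color.teal)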
Recommended Usage
For scalping on lower timeframes, it is recommended to use the fixed α Decay Factor. It is kept in raw form for better control, but you can easily translate it into an N-bar window as with an EMA (N ≈ 2 / decayFactor - 1) or an RMA (N ≈ 1 / decayFactor).
The HTF selector handles Higher Time Frame analysis quite well; for example, on a daily chart you can use a 200-day, weekly, or monthly timeframe as the HTF.
Suitable for trend confirmation, breakout detection, and mean reversion plays.
The %B-like oscillator helps gauge momentum strength and detect divergences in price action if the user prefers a clean chart without bands; this is possible thanks to Pine Script v6's force-overlay feature.
Ideal for markets with volume-driven momentum shifts (e.g., futures, forex, crypto).
Customization Parameters
Fixed α Decay Factor – Controls the rate of volume-weighting influence for an approximate EWMA approach instead of using sums of series or arrays, keeping the code lightweight and the computation fast (O(1)).
HTF Volume Smoothing – Instead of a fixed denominator for computing α, the volume sum of the last 2 closed higher-timeframe candles is used as the denominator for our α weight factor. This is useful for reviewing major trends, for example on daily, weekly, or monthly timeframes.
Tension Multipliers (±σ) – Adjusts sensitivity to dispersion sigma parameter (volatility).
Oscillator Zone Fills – Visual cues for price positioning within the cloud range.
Posible Interpretations
As market within indicators relay on each individual edge, this are just some key ideas to glimpse how the indicator could be interpreted by the user:
📌 Price inside bands – Market is considered somehow "stable"; price is like resting from tension or "charging batteries" for volume spike moves.
📌 Price breaking outer bands – Potential breakout or extreme movement; watch for reversals or continuation from strong moves. Market is already in tension or generating it.
📌 Narrowing Bands – Decreasing volatility; expect contraction before expansion.
📌 Widening Bands – Increased volatility; prepare for high probability pull-back moves, specially to the center location of the bands (the mean) or the other side of them.
📌 The oscillator is simply the price normalized across the Student-T distribution fitting "curve", using the location parameter, our Elastic Volume-weighted mean (eμ), fixed at the 0.5 value.
Final Thoughts
The Elastic Volume-Weighted Student-T Tension indicator provides a powerful, volume-sensitive alternative to traditional volatility bands. By integrating real-time volume analysis with an adaptive statistical model and incremental variance computation, in a relative price oscillator that can be overlaid on the chart as bands, it offers traders an edge in identifying momentum shifts, trend strength, and breakout potential. Think of the distribution as a relative "tension" rubber band from which price rarely strays too far on its own.
DISCLAIMER:
The following indicator/code IS NOT intended to be formal investment advice or a recommendation by the author, nor should it be construed as such. Users are fully responsible for their use of it with their own trading vehicles/assets.
The following indicator was made for NON LUCRATIVE ACTIVITIES and must remain as is, following TradingView's regulations. The indicator and its code are published for work and knowledge sharing. Any access to, use, copy, or re-use of it should mention authorship(s) and origin(s).
WARNING NOTICE!
THE INCLUDED FUNCTION MUST BE CONSIDERED FOR TESTING. The models included in the indicator have been taken from open sources on the web, and some of them have been modified by the author; problems could occur with diverse data scenarios, compiler versions, or any other externality.
Dynamic RSI Regression Bands (Zeiierman)█ Overview
The Dynamic RSI Regression Bands (Zeiierman) is a regression channel tool that dynamically resets based on RSI overbought and oversold conditions. It adapts to trend shifts in real time, creating a highly responsive regression framework that visualizes market sentiment and directional momentum with every RSI-triggered event.
Unlike static regression models, this indicator recalibrates its slope and deviation bands only after the RSI crosses predefined thresholds, helping traders pinpoint new phases of momentum, exhaustion, or reversal.
You’re not just measuring the trend — you’re tracking when and where the trend deserves to be re-evaluated.
█ The Assumption:
"A major momentum shift (RSI crossing OB/OS) signals a potential regime change, and thus, the trend model should be recalibrated from that point."
Instead of using a fixed-length regression (which assumes trend relevance over a static window), this script resets the regression calculation every time RSI crosses into extreme territory. The underlying idea is that extreme RSI levels often represent emotional peaks in market behavior and are statistically likely to be followed by a new price structure.
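A compact sketch of that reset-and-grow idea is shown below (illustrative only; the actual indicator also draws the channel, deviation bands, and rejection markers):
//@version=6
indicator("RSI-reset regression sketch", overlay = true, max_bars_back = 500)

int   rsiLen = input.int(14, "RSI length")
float obLvl  = input.float(70, "Overbought")
float osLvl  = input.float(30, "Oversold")
int   maxLen = input.int(200, "Max regression length")

float r = ta.rsi(close, rsiLen)
bool reset = ta.crossover(r, obLvl) or ta.crossunder(r, osLvl)

// Remember where the last RSI extreme occurred and grow the lookback from that bar.
var int anchor = 0
if reset
    anchor := bar_index
int lookback = math.min(maxLen, bar_index - anchor + 1)

// Ordinary least-squares fit of close against bar position over the growing window.
float sumX  = 0.0
float sumY  = 0.0
float sumXY = 0.0
float sumXX = 0.0
for i = 0 to lookback - 1
    float x = i
    float y = close[lookback - 1 - i]
    sumX  += x
    sumY  += y
    sumXY += x * y
    sumXX += x * x
float n = lookback
float slope     = (n * sumXY - sumX * sumY) / math.max(n * sumXX - sumX * sumX, 1e-10)
float intercept = (sumY - slope * sumX) / n
float regNow    = intercept + slope * (n - 1)

plot(regNow, "Regression value at current bar", color = color.orange)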
█ How It Works
⚪ RSI-Based Channel Reset
RSI is monitored continuously
If RSI crosses above the Overbought level, the indicator resets and starts a new regression channel
If RSI crosses below the Oversold level, the same reset logic applies
These events act as “anchor points” for dynamic trend analysis
⚪ Regression Channel Logic
A custom linear regression is calculated from the RSI reset point forward
The lookback grows with each bar after the reset, up to a user-defined max
Regression lines are drawn from the reset point to the current bar
⚪ Standard Deviation Bands
Upper and lower bands are plotted around the regression line using the standard deviation
These serve as dynamic volatility envelopes, great for spotting breakouts or reversals
⚪ Rejection Markers
If price hits the upper/lower band and then closes back inside it, a rejection marker is plotted
Helps visualize failed breakouts and areas of absorption or reversal pressure
█ How to Use
⚪ Detect Trend Shifts
Use the RSI resets to identify when the trend might be starting fresh.
⚪ Watch the Bands for Volatility Extremes
Use the outer bands as soft areas of potential reversal or momentum breakout.
⚪ Spot Rejections for Potential Entry Signals
If price moves outside a band but then quickly returns inside, it often means the breakout failed, and price may reverse.
█ Settings Explained
RSI Length – How many bars RSI uses. Shorter = faster.
OB / OS Levels – Crossing these triggers a regression reset.
Base Regression Length – Max number of bars regression can use post-reset.
StdDev Multiplier – Controls band width from the regression line.
Min Bars After Reset – Ensures channel doesn’t form immediately; waits for structure.
Show Reset Markers – Triangles mark where RSI crossed OB/OS.
Show Rejection Markers – Circles mark where the price rejected the channel edge.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.