Frequency and Volume Profile
FREQUENCY & VOLUME PROFILE
⚪ OVERVIEW
The Frequency and Volume Profile indicator plots a frequency or volume profile based on the visible bars on the chart, providing insights into price levels with significant trading activity.
⚪ USAGE
● Market Structure Analysis:
Identify key price levels where significant trading activity occurred, which can act as support and resistance zones.
● Volume Analysis:
Use the volume mode to understand where the highest trading volumes have occurred, helping to confirm strong price levels.
● Trend Confirmation:
Analyze the distribution of trading activity to confirm or refute trends and to mark important levels as support and resistance, aiding more informed trading decisions.
● Frequency Distribution:
In statistics, a frequency distribution is a list of the values that a variable takes in a sample, together with how often each value occurs. It is typically displayed as a histogram.
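As a rough sketch of the underlying idea (this is not the published source; the lookback and bin count are hypothetical stand-ins for the script's visible-range logic), one can tally how many bars fall into each price bin and draw the counts as horizontal lines:
//@version=5
indicator("Frequency Profile (sketch)", overlay = true, max_lines_count = 500)
src      = input.source(hl2, "Source")
lookback = input.int(200, "Bars to profile", minval = 10)
bins     = input.int(40, "Price bins", minval = 2)
float hi = ta.highest(src, lookback)
float lo = ta.lowest(src, lookback)
if barstate.islast
    float step = math.max(hi - lo, syminfo.mintick) / bins
    counts = array.new_int(bins, 0)
    // Tally how many of the last `lookback` source values fall into each price bin.
    for i = 0 to lookback - 1
        if not na(src[i])
            int idx = math.min(bins - 1, math.max(0, int((src[i] - lo) / step)))
            array.set(counts, idx, array.get(counts, idx) + 1)
    // Draw each bin as a horizontal line whose length reflects its frequency.
    for b = 0 to bins - 1
        float y = lo + step * (b + 0.5)
        line.new(bar_index, y, bar_index + array.get(counts, b), y, width = 2, color = color.new(color.blue, 40))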
⚪ SETTINGS
Source: Select the price data to use for the profile calculation (default: hl2).
Move Profile: Set the number of bars to offset the profile from the current bar (default: 100).
Mode: Choose between "Frequency" and "Volume" for the profile calculation.
Profile Color: Customize the color of the profile lines.
Lookback Period: Uses 5000 bars for daily and higher timeframes, otherwise 10000 bars.
The Frequency Profile indicator is a powerful tool for visualizing price levels with significant trading activity, whether in terms of frequency or volume. Its dynamic calculation and customizable settings make it a versatile addition to any trading strategy.
Multi Asset Histogram [ChartPrime]
Multi Asset Histogram Indicator
Overview:
The "Multi Asset Histogram" indicator provides a comprehensive visualization of the performance of multiple assets relative to each other. By calculating a score for each asset and displaying it in a histogram format, this indicator helps traders quickly identify the trends, dominant asset and the average performance of the assets in the selected group.
Key Features:
◆ Multi-Asset Score Calculation:
The indicator calculates a trend score for each selected asset based on the price source (e.g., hl2).
The trend score is determined by comparing the current price to each of the prices over a user-defined number of past bars, adding a point when the current price is higher than a previous price and subtracting a point when it is lower.
// Score Function
trscore(src) =>
    total = 0.0
    for i = 1 to 50
        total += (src >= nz(src[i]) ? 1 : -1)  // +1 if the current value is above the value i bars ago, -1 otherwise
    total
◆ Flexible Symbol Input:
Traders can input up to 10 different symbols (e.g., BTCUSD, ETHUSD, etc.) to be included in the histogram analysis.
◆ Dynamic Visualization:
A histogram is plotted for each asset, with bars colored based on the score, providing a clear visual representation of the relative performance.
Color gradients from red to aqua indicate the performance, with red representing negative scores and aqua representing positive scores.
◆ Adaptive Histogram Lines:
The width and placement of histogram lines adapt based on the calculated scores, ensuring clear visualization regardless of the values.
Dashed lines represent the mean score of all assets, helping traders identify the overall market trend.
◆ Detailed Labels and Values:
Labels are placed on the histogram to display the exact score for each asset.
Mean value and zero line labels provide additional context for the overall performance.
◆ Visual Scaling Lines:
Zero line and mean line are clearly marked, helping traders understand the distribution and scale of scores.
Scales on the left and right of the histogram indicate the performance range.
◆ Informative Table:
A table is displayed on the chart, showing the dominant asset (the one with the highest score) and the mean score of all assets.
The table updates dynamically to reflect real-time changes in asset performance.
◆ Settings:
Length: The number of bars back against which the current value of the source is compared.
Source: The price source to be used for score calculation (e.g., hl2).
Symbols: Up to 10 different asset symbols can be input for analysis.
Usage Notes:
This indicator is useful for traders who monitor multiple assets simultaneously and need a quick visual reference to identify the strongest and weakest performers.
The color coding and dynamic labels make it easy to interpret the relative performance and make informed trading decisions.
This indicator is designed to enhance multi-asset analysis by providing a clear, visual representation of each asset's performance relative to the others, making it easier to identify trends and dominant assets in the market.
Asset Drawdown & Drawdown HeatMap [InvestorUnknown]
Overview
The "Asset Drawdown & Drawdown HeatMap" indicator is designed for educational purposes to help users visualize and analyze the drawdowns of various assets. It highlights both recent and historical drawdowns, offering valuable insights into the performance and risk of different investments. Additionally, it can serve as a complementary analysis tool for trading and investing decisions.
Features
Drawdown Calculation:
Computes the drawdown from the highest value (ATH) to the current value, showing the percentage decline.
Displays both the current drawdown and the maximum historical drawdown for the selected assets.
HeatMap Visualization:
Uses a gradient color scheme to represent the magnitude of drawdowns over a specified lookback period.
Helps identify periods of significant decline and recovery visually.
Multiple Assets:
Supports up to 10 different assets (adding more would make it harder to see the drawdowns of different assets), allowing users to compare drawdowns across various symbols.
Each asset can be individually plotted and color-coded for clarity.
Customizable Settings:
User inputs for high and low value calculations, color preferences, and lookback periods.
Option to color bars based on the drawdown heatmap.
Detailed Functionality
Drawdown Calculation:
The DD() function calculates the current drawdown and the maximum historical drawdown based on the high and low values.
The drawdown is calculated as 100 - (lowvalue / ATH * 100), where ATH is the highest value observed so far.
// - - - - - Custom Function - - - - - //{
DD() =>
    ATH = highvalue
    ATH := na(ATH[1]) ? highvalue : math.max(highvalue, ATH[1])
    Drawdown = 100 - lowvalue / ATH * 100
    MaxDrawdown = Drawdown
    MaxDrawdown := na(MaxDrawdown[1]) ? Drawdown : math.max(Drawdown, MaxDrawdown[1])
    [Drawdown, MaxDrawdown]  // returns the current and the maximum drawdown for use in request.security()
//}
Security Request:
Uses the request.security() function to fetch drawdown data for each specified asset on a daily timeframe.
Computes both current drawdown (TnDD) and maximum drawdown (TnMDD) for each asset.
// - - - - - Create Variables - - - - - //{
[T0DD,  T0MDD]  = request.security("", "1D", DD()) // Chart
[T1DD,  T1MDD]  = request.security(t1, "1D", DD())
[T2DD,  T2MDD]  = request.security(t2, "1D", DD())
[T3DD,  T3MDD]  = request.security(t3, "1D", DD())
[T4DD,  T4MDD]  = request.security(t4, "1D", DD())
[T5DD,  T5MDD]  = request.security(t5, "1D", DD())
[T6DD,  T6MDD]  = request.security(t6, "1D", DD())
[T7DD,  T7MDD]  = request.security(t7, "1D", DD())
[T8DD,  T8MDD]  = request.security(t8, "1D", DD())
[T9DD,  T9MDD]  = request.security(t9, "1D", DD())
[T10DD, T10MDD] = request.security(t10, "1D", DD())
//}
Plotting:
Plots the drawdown values for each asset on the chart, with the option to enable or disable plotting for individual assets.
Colors the plotted lines and labels based on user-specified preferences.
HeatMap:
Creates a heatmap color gradient based on the drawdown values over the lookback period.
Colors the bars on the chart according to the heatmap to visualize drawdown severity over time.
// - - - - - HeatMap - - - - - //{
heatcol = color.from_gradient(T0DD, ta.lowest(T0DD,lookback), ta.highest(T0DD,lookback), topcol, botcol)
barcolor(colbars ? heatcol : na)
//}
Labels:
Displays labels for each asset's drawdown value at the end of the chart for quick reference.
This indicator is an excellent tool for educational purposes, helping users understand drawdown dynamics and their implications on asset performance. It also provides a visual aid for monitoring and comparing drawdowns across multiple assets, which can be beneficial for making informed trading and investment decisions.
Moving Average Cross Probability [AlgoAlpha]
Moving Average Cross Probability 📈✨
The Moving Average Cross Probability by AlgoAlpha calculates the probability of a cross-over or cross-under between the fast and slow values of a user-defined Moving Average type before it happens, allowing users to benefit by front-running the market.
✨ Key Features:
📊 Probability Histogram: Displays the Probability of MA cross in the form of a histogram.
🔄 Data Table: Displays forecast information for quick analysis.
🎨 Customizable MAs: Choose from various moving averages and customize their length.
🚀 How to Use:
🛠 Add Indicator: Add the indicator to favorites, and customize the settings to suit your trading style.
📊 Analyze Market: Watch the indicator to look for trend shifts early or for trend continuations.
🔔 Set Alerts: Get notified of bullish/bearish points.
✨ How It Works:
The Moving Average Cross Probability Indicator by AlgoAlpha estimates the probability by looking at a probable range of values that price can take on the next bar and finding what percentage of those possibilities would result in the user-defined moving averages crossing each other. It first uses an HMA to predict the next price value, then builds a standard-deviation-based range around that prediction. The range is divided by the user-defined resolution into multiple levels, each representing a possible price for the next bar. These candidate prices are used to calculate the possible next-bar values of both the fast and slow MAs, which are then compared to see how many of those possible MA values end up crossing each other.
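To make that procedure concrete, here is a compressed sketch of the same idea (simple moving averages only, and the forecast/range parameters are hypothetical — this is not the published source):
//@version=5
indicator("MA Cross Probability (sketch)")
fastLen = input.int(20, "Fast SMA length")
slowLen = input.int(50, "Slow SMA length")
steps   = input.int(40, "Resolution (price levels tested)")
// Hypothetical next-price forecast and range: an HMA projection +/- one standard deviation.
float forecast = ta.hma(close, 9)
float dev      = ta.stdev(close, 20)
// Next-bar SMA = (current window minus its oldest value, plus the candidate price) / length.
float fastSumExclOldest = math.sum(close, fastLen) - close[fastLen - 1]
float slowSumExclOldest = math.sum(close, slowLen) - close[slowLen - 1]
bool  fastAboveNow      = ta.sma(close, fastLen) > ta.sma(close, slowLen)
int crosses = 0
for i = 0 to steps
    float p = forecast - dev + 2 * dev * i / steps      // candidate next-bar price
    float nextFast = (fastSumExclOldest + p) / fastLen
    float nextSlow = (slowSumExclOldest + p) / slowLen
    if (nextFast > nextSlow) != fastAboveNow            // would this candidate flip the MA relationship?
        crosses += 1
float probability = 100.0 * crosses / (steps + 1)
plot(probability, "Cross probability %", style = plot.style_columns)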
Stay ahead of the market with the Moving Average Cross Probability Indicator by AlgoAlpha! 📈💡
Exponential Grid [Phi, Pi, Euler]
If you disagree with the EMH principle that price is purely random, then by definition you must agree that historical price has a deterministic relationship to the scenario ahead.
I personally believe that constants like phi, pi and e can mimic exponential growth of the price.
In this script, the first grid level is based on the Lowest price multiplied by the constant raised to a fractional power.
For example:
If you are familiar with fib ratio 1.272, then you must know that it is 1.618 to the power of 0.5.
With the default exponent step of 0.25:
First grid = Lowest price x phi^0.25
Second grid = Lowest price x phi^0.25x2
Third grid = Lowest price x phi^0.25x3 and so on
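Expressed as a minimal Pine sketch (with a fixed Lowest price input and only a handful of levels, instead of the script's automatic detection and 64 plots):
//@version=5
indicator("Exponential grid (sketch)", overlay = true)
lowestPrice = input.float(100.0, "Lowest price")
constant    = input.float(1.618, "Constant (phi, pi, e ...)")
expStep     = input.float(0.25, "Exponent step")
// Grid n = Lowest price x constant ^ (exponent step x n)
plot(lowestPrice, "Grid 0", color = color.gray)
plot(lowestPrice * math.pow(constant, expStep),     "Grid 1")
plot(lowestPrice * math.pow(constant, expStep * 2), "Grid 2")
plot(lowestPrice * math.pow(constant, expStep * 3), "Grid 3")
plot(lowestPrice * math.pow(constant, expStep * 4), "Grid 4")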
The script will automatically find the lowest price and update the grid values.
Or you can set up your custom Lowest price manually if you feel like the All Time Low level loses its relevance value after long period.
There are 64 grid levels, including the Lowest price level, and that is not by chance: Pine Script has a limit of 64 plots. The number of grids shown on the chart depends on the highest price. Once price breaks above the ATH, the next couple of grids are plotted automatically. If everything were plotted at once, the chart would appear squeezed and you would need to zoom in to see it, so the display is adjusted relative to the scale of the chart for comfort.
In some cases 64 plots aren't enough to cover the whole chart. For example, let's take a look at the NVIDIA chart:
Since the price started at 0.0333, the default settings are far from covering the whole range.
We are left with 2 choices:
Either Enable "Round"
OR increase Exponent Step (from 0.25 to 0.5 in the particular example below)
If you set the constant to pi or e, which are larger numbers than phi, expect the gaps between grids to be wider. To make the expansion more gradual, decrease the Exponent Step.
Volume, Volatility, and Momentum Metrics Indicator
Volume, Volatility, and Momentum Metrics Indicator
Welcome to our comprehensive TradingView indicator designed to provide traders with essential volume, volatility, and momentum metrics. This powerful tool is ideal for traders looking to enhance their market analysis by visualizing key indicators in a concise and easy-to-read format.
Key Features
1. Volume Metrics:
• Daily Dollar Volume: Understand the monetary value of the traded volume each day.
• Relative Volume (RVOL) Day: Compare the current volume to the previous day’s volume to gauge trading activity.
• Relative Volume (RVOL) 30D: Assess the average trading volume over the past 30 days.
• Relative Volume (RVOL) 90D: Evaluate the average trading volume over the past 90 days.
2. Volatility and Momentum Metrics:
• Average Daily Range (ADR) %: Measure the average daily price range as a percentage of the current price.
• Average True Range (ATR): Track the volatility by calculating the average true range over a specified period.
• Relative Strength Index (RSI): Determine the momentum by analyzing the speed and change of price movements.
3. Customizable Table Positions:
• Place the volume metrics table and the volatility/momentum metrics table in the bottom-left or bottom-right corners of your chart for optimal visibility and convenience.
Why Use This Indicator?
• Enhanced Market Analysis: Quickly assess volume trends, volatility, and momentum to make informed trading decisions.
• User-Friendly Interface: The clear and concise tables provide at-a-glance information without cluttering your chart.
• Customization Options: Choose where to display the tables to suit your trading style and preferences.
How It Works
This indicator uses advanced calculations to provide real-time data on trading volume, price range, and momentum. By displaying this information in separate, neatly organized tables, traders can easily monitor these critical metrics without diverting their focus from the main chart.
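As a rough illustration of the kind of calculations involved (a hedged sketch with hypothetical averaging windows, not the published code):
//@version=5
indicator("Volume / volatility / momentum metrics (sketch)")
// Daily values requested so the volume metrics match the description regardless of chart timeframe.
[dVol, dHigh, dLow, dClose] = request.security(syminfo.tickerid, "D", [volume, high, low, close])
dollarVolume = dVol * dClose                             // daily dollar volume
rvolDay      = dVol / dVol[1]                            // volume vs. previous day
rvol30       = dVol / ta.sma(dVol, 30)                   // volume vs. 30-day average
adrPct       = ta.sma(dHigh - dLow, 20) / dClose * 100   // average daily range as % of price (hypothetical 20-day window)
atr14        = ta.atr(14)                                // ATR on the chart timeframe
rsi14        = ta.rsi(close, 14)                         // RSI on the chart timeframe
plot(dollarVolume, "Dollar volume", display = display.data_window)
plot(rvolDay, "RVOL (day)", display = display.data_window)
plot(rvol30, "RVOL (30D)", display = display.data_window)
plot(adrPct, "ADR %", display = display.data_window)
plot(atr14, "ATR", display = display.data_window)
plot(rsi14, "RSI")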
Who Can Benefit?
• Day Traders: Quickly gauge intraday trading activity and volatility.
• Swing Traders: Analyze longer-term volume trends and momentum to identify potential trade setups.
• Technical Analysts: Utilize comprehensive metrics to enhance technical analysis and trading strategies.
Get Started
To add this powerful indicator to your TradingView chart, simply search for “Volume, Volatility, and Momentum Metrics” in the TradingView public library, or use the provided link to add it directly to your chart. Enhance your trading analysis and make more informed decisions with our comprehensive TradingView indicator.
Price & Moving Average + Financial Indicator
This indicator displays:
Moving Average that can be set into SMA or EMA: Default setting is SMA 50
Label price for today's MA
Basic Financial Data:
Type of Sector
Type of Industry
P/E Ratio
Price to Book Ratio
ROE
Revenue (FQ)
Earnings (FQ)
Once again, I let the script open for you guys to custom it based on your own preferences. Hope you guys enjoy it!
Bitcoin Fundamentals - Bitcoin Block Reward
The Bitcoin Block Reward is the batch of new Bitcoins generated by the miners after solving each block.
The Block Reward is set as a basic rule and cannot be changed without agreement across the entire Bitcoin network. It started at 50 BTC during the first period. Afterwards, the Block Reward is cut to half of its value (a Halving Event) after each cycle of 210000 mined blocks.
This is the only way that new bitcoins are created. It creates an incentive for miners to secure the network.
Over time the Block Reward will decrease to a value that might not cover the mining costs. By that point, use of the Bitcoin network may have increased enough to generate sufficient transaction fees to cover those costs.
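As a back-of-the-envelope illustration of that rule (not the library's implementation, which works from a date/time), the reward for a given block height follows directly from the halving cycle:
//@version=5
indicator("BTC block reward from height (sketch)")
// Reward = 50 / 2^floor(height / 210000)
blockHeight = input.int(840000, "Block height", minval = 0)
halvings    = math.floor(blockHeight / 210000)
reward      = 50.0 / math.pow(2, halvings)
plot(reward, "Block reward (BTC)")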
MOTIVATION
Even though this is a very simple indicator, I'm currently missing a data source to compute the Block Reward value within TradingView. Therefore, I created this indicator and its associated library function to enable its visualization and (eventually) to let coders use the source function to power more elaborate scripts related to Halving Events.
Hope that helps!
CryptoLibrary "Crypto"
This Library includes functions related to cryptocurrencies and their blockchains
btcBlockReward(t)
Delivers the BTC block reward for a specific date/time
Parameters:
t (int) : Time of the current candle
Returns: blockRewardBtc
Bearish vs Bullish Arguments
The Bearish vs Bullish Arguments Indicator is a tool designed to help traders visually assess and compare the number of bullish and bearish arguments based on their custom inputs. This script enables users to input up to five bullish and five bearish arguments, dynamically displaying the bias on a clean and customizable table on the chart. This provides traders with a clear, visual representation of the market sentiment they have identified.
Key Features:
Customizable Inputs: Users can input up to five bullish and five bearish arguments, which are displayed in a table on the chart.
Bias Calculation: The script calculates the bias (Bullish, Bearish, or Neutral) based on the number of bullish and bearish arguments provided.
Color Customization: Users can customize the colors for the table background, text, and headers, ensuring the table fits seamlessly into their charting environment.
Reset Functionality: A reset switch allows users to clear all input arguments with a single click, making it easy to start fresh.
How It Works:
Input Fields: The script provides input fields for up to five bullish and five bearish arguments. Each input is a simple text field where users can describe their arguments.
Bias Calculation: The script counts the number of non-empty bullish and bearish arguments and determines the overall bias. The bias is displayed in the table with a dynamically changing color to indicate whether the market sentiment is bullish, bearish, or neutral.
Customizable Table: The table is positioned on the chart according to the user's preference (top-left, top-right, bottom-left, bottom-right) and can be customized in terms of background color and text color.
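The bias calculation described above boils down to counting non-empty inputs on each side; a stripped-down sketch (only two inputs per side here, and not the published code):
//@version=5
indicator("Bull vs bear bias (sketch)")
bull1 = input.string("", "Bullish argument 1")
bull2 = input.string("", "Bullish argument 2")
bear1 = input.string("", "Bearish argument 1")
bear2 = input.string("", "Bearish argument 2")
countNonEmpty(a, b) => (str.length(a) > 0 ? 1 : 0) + (str.length(b) > 0 ? 1 : 0)
bullCount = countNonEmpty(bull1, bull2)
bearCount = countNonEmpty(bear1, bear2)
bias      = bullCount > bearCount ? "Bullish" : bearCount > bullCount ? "Bearish" : "Neutral"
if barstate.islast
    var table t = table.new(position.top_right, 1, 1)
    table.cell(t, 0, 0, "Bias: " + bias + " (" + str.tostring(bullCount) + " vs " + str.tostring(bearCount) + ")", text_color = color.white, bgcolor = color.new(color.blue, 60))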
How to Use:
Add the Indicator: Add the Bearish vs Bullish Arguments Indicator to your chart.
Input Arguments: Enter up to five bullish and five bearish arguments in the provided input fields in the script settings.
Customize Appearance: Adjust the table's background color, text color, and position on the chart to fit your preferences.
Example Use Case:
A trader might use this indicator to visually balance their arguments for and against a particular trade setup. By entering their reasons for a bullish outlook in the bullish argument fields and their reasons for a bearish outlook in the bearish argument fields, they can quickly see which side has more supporting points and make a more informed trading decision.
This script was inspired by Arjoio's concepts
Depth of Market (DOM) [LuxAlgo]
The Depth Of Market (DOM) tool allows traders to look under the hood of any market, taking price and volume analysis to the next level. The following features are included: DOM, Time & Sales, Volume Profile, Depth of Market, Imbalances, Buying Pressure, and up to 24 key intraday levels (it really packs a punch).
As a disclaimer, this tool does not use tick data, it is a DOM reconstruction from the provided real-time time series data (price and volume). So the volume you see is from filled orders only, this tool does not show unfilled limit orders.
Traders can enable or disable any of the features at will to avoid being overwhelmed with too much information and to make the tool perform faster.
The features that have the biggest impact on performance are Historical Data Collection, Key Levels (POC & VWAP), Time & Sales, Profile, and Imbalances. Disable these features to improve the indicator computational performance.
🔶 DOM
This is the simplest form of the tool, a simple DOM or ladder that displays the following columns:
PRICE: Price level
BID: Total number of market sell orders filled or limit buy orders filled.
SELL: Sell market orders
BUY: Buy market orders
ASK: Total number of market buy orders filled or limit sell orders filled.
The DOM only collects historical data from the last 24 hours and real-time data.
Traders can select a reset period for the DOM with two options:
DAILY: Resets at the beginning of each trading day
SESSIONS: Resets twice, as DAILY and 15.5 hours later, to coincide with the start of the RTH session for US tickers.
The DOM has two main modes, it can display price levels as ticks or points. The default is automatic based on the current daily volatility, but traders can manually force one mode or the other if they wish.
For convenience, traders have the option to set the number of lines (price levels), and the size of the text and to display only real-time data.
By default, the top price is set to 0 so that the DOM automatically adjusts the price levels to be displayed, but traders can set the top price manually so that the tool displays only the desired price levels in a fixed manner.
🔹 Volume Profile
As additional features to the basic DOM, traders have access to the volume profile histogram and the total volume per price level.
This helps traders identify at a glance key price areas where volume is accumulating (high volume nodes) or areas where volume is lacking (low volume nodes) - these areas are important to some traders who base their decision-making process on them.
🔹 Imbalances
Other added features are imbalances and buying pressure:
Interlevel Imbalance: volume delta between two different price levels
Intralevel Imbalance: delta between buy and sell volume at the same price level
Buying Pressure Percent: percentage of buy volume compared to total volume
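For intuition only, these three measures can be approximated per bar from a crude buy/sell volume split (the DOM tool itself reconstructs them per price level from real-time data, so this sketch is not its implementation):
//@version=5
indicator("Imbalance measures (sketch)")
// Crude proxy: assign each bar's volume to the buy or sell side by candle direction.
buyVol  = close >= open ? volume : 0.0
sellVol = close <  open ? volume : 0.0
intralevelDelta = buyVol - sellVol                          // buy vs. sell volume at the same "level" (here: bar)
buyingPressure  = volume > 0 ? 100 * buyVol / volume : na   // % of total volume that was buying
interlevelDelta = volume - volume[1]                        // volume delta between adjacent "levels" (here: bars)
plot(intralevelDelta, "Intralevel delta", style = plot.style_columns)
plot(buyingPressure, "Buying pressure %", display = display.data_window)
plot(interlevelDelta, "Interlevel delta", display = display.data_window)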
Imbalances can help traders identify areas of interest in the price for possible support or resistance.
🔹 Depth
Depth allows traders to see at a glance how much supply is above the current price level or how much demand is below the current price level.
Above the current price level shows the cumulative ask volume (filled sell limit orders) and below the current price level shows the cumulative bid volume (filled buy limit orders).
🔶 KEY LEVELS
The tool includes up to 24 different key intraday levels of particular relevance:
Previous Week Levels
PWH: Previous week high
PWL: Previous week low
PWM: Previous week middle
PWS: Previous week settlement (close)
Previous Day Levels
PDH: Previous day high
PDL: Previous day low
PDM: Previous day middle
PDS: Previous day settlement (close)
Current Day Levels
OPEN: Open of day (or session)
HOD: High of day (or session)
LOD: Low of day (or session)
MOD: Middle of day (or session)
Opening Range
ORH: Open range high
ORL: Open range low
Initial Balance
IBH: Initial balance high
IBL: Initial balance low
VWAP
+3SD: Volume weighted average price plus 3 standard deviations
+2SD: Volume weighted average price plus 2 standard deviations
+1SD: Volume weighted average price plus 1 standard deviation
VWAP: Volume weighted average price
-1SD: Volume weighted average price minus 1 standard deviation
-2SD: Volume weighted average price minus 2 standard deviations
-3SD: Volume weighted average price minus 3 standard deviations
POC: Point of control
Different traders look at different levels, the key levels shown here are objective and specific areas of interest that traders can act on, providing us with potential areas of support or resistance in the price.
🔶 TIME & SALES
The tool also features a full-time and sales panel with time, price, and size columns, a size filter, and the ability to set the timezone to display time in the trader's local time.
The information shown here is what feeds the DOM and it can be useful in several ways, for example in detecting absorption. If a large number of orders are coming into the market but the price is barely moving, this indicates that there is enough liquidity at these levels to absorb all these orders, so if these orders stop coming into the market, the price may turn around.
🔶 SETTINGS
Period: Select the anchoring period to start data collection, DAILY will anchor at the start of the trading day, and SESSIONS will start as DAILY and 15.5 hours later (RTH for US tickers).
Mode: Select between AUTO and MANUAL modes for displaying TICKS or POINTS, in AUTO mode the tool will automatically select TICKS for tickers with a daily average volatility below 5000 ticks and POINTS for the rest of the tickers.
Rows: Select the number of price levels to display
Text Size: Select the text size
🔹 DOM
DOM: Enable/Disable DOM display
Realtime only: Enable/Disable real-time data only, historical data will be collected if disabled
Top Price: Specify the price to be displayed on the top row, set to 0 to enable dynamic DOM
Max updates: Specify how many times the values on the SELL and BUY columns are accumulated until reset.
Profile/Depth size: Maximum size of the histograms on the PROFILE and DEPTH columns.
Profile: Enable/Disable Profile column. High impact on performance.
Volume: Enable/Disable Volume column. Total volume traded at price level.
Interlevel Imbalance: Enable/Disable Interlevel Imbalance column. Total volume delta between the current price level and the price level above. High impact on performance.
Depth: Enable/Disable Depth, showing the cumulative supply above the current price and the cumulative demand below. Impact on performance.
Intralevel Imbalance: Enable/Disable Intralevel Imbalance column. Delta between total buy volume and total sell volume. High impact on performance.
Buying Pressure Percent: Enable/Disable Buy Percent column. Percentage of total buy volume compared to total volume.
Imbalance Threshold %: Threshold for highlighting imbalances. Set to 90 to highlight the top 10% of interlevel imbalances and the top and bottom 10% of intra-level imbalances.
Crypto volume precision: Specify the number of decimals to display on the volume of crypto assets
🔹 Key Levels
Key Levels: Enable/Disable KEY column. Very high performance impact.
Previous Week: Enable/Disable High, Low, Middle, and Close of the previous trading week.
Previous Day: Enable/Disable High, Low, Middle, and Settlement of the previous trading day.
Current Day/Session: Enable/Disable Open, High, Low and Middle of the current period.
Open Range: Enable/Disable High and Low of the first candle of the period.
Initial Balance: Enable/Disable High and Low of the first hour of the period.
VWAP: Enable/Disable Volume-weighted average price of the period with 1, 2, and 3 standard deviations.
POC: Enable/Disable Point of Control (price level with the highest volume traded) of the period.
🔹 Time & Sales
Time & Sales: Enable/Disable time and sales panel.
Timezone offset (hours): Enter your time zone's offset (+ or −), including a decimal fraction if needed.
Order Size: Set order size filter. Orders smaller than the value are not displayed.
🔶 THANKS
Hi, I'm makit0 coder of this tool and proud member of the LuxAlgo Opensource team, it's an honor to be part of the LuxAlgo family doing something I love as it's writing opensource code and sharing it with the world. I'd like to thank all of you who use, comment on, and vote for all of our open-source tools, and all of you who give us your support.
And of course thanks to the PineCoders family for all the work in front of and behind the scenes that makes the PineScript community what it is, simply the best.
Peace, Love & PineScript!
PFCF Price Band
PFCF Price Band shows price levels calculated from the previous period's highest and lowest P/FCFPS (price to trailing-twelve-month free cash flow per share), each multiplied by the current TTM FCFPS (similar to the price theory: price = P/E x expected earnings per share).
If the current P/FCFPS is lower than the historical minimum P/FCFPS, the price is considered cheap. Conversely, above the historical maximum P/FCFPS it is considered expensive.
PFCF Price Band consists of 2 parts.
- First, the historical P/FCFPS band, shown in "Green" (if TTM FCFPS is positive) or "Red" (if TTM FCFPS is negative); it updates based on the latest high or low of the TTM P/FCFPS.
- Second, the blue line is the closing price divided by TTM FCFPS, which shows the current P/FCF.
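A sketch of the band construction (the financial IDs and the one-year window below are assumptions and may not be available for every symbol; this is not the published script):
//@version=5
indicator("P/FCF band (sketch)", overlay = true)
// Assumed financial IDs; availability depends on the symbol.
fcf    = request.financial(syminfo.tickerid, "FREE_CASH_FLOW", "TTM")
shares = request.financial(syminfo.tickerid, "TOTAL_SHARES_OUTSTANDING", "FQ")
fcfps  = fcf / shares                        // TTM free cash flow per share
pfcf   = close / fcfps                       // current P/FCF multiple
hiPfcf = ta.highest(high / fcfps, 252)       // highest multiple over ~1 year (hypothetical window)
loPfcf = ta.lowest(low / fcfps, 252)         // lowest multiple over ~1 year
plot(hiPfcf * fcfps, "Upper band (expensive)", color = color.red)
plot(loPfcf * fcfps, "Lower band (cheap)", color = color.green)
plot(pfcf, "Current P/FCF", color = color.blue, display = display.data_window)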
P.S. It is recommended to use it together with the PE Band indicator because just net profit does not mean that a company has good cash flow.
ΔYoY (Economics)
Year-over-year indicator that benchmarks the most recent data point against its value one year back. The lookback is automated for quarterly and monthly data based on the selected timeframe (3M for quarterly, 1M for monthly). TradingView aggregates weekly data into a monthly data point. An SMA is applied to get the average over a chosen period.
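A minimal sketch of that year-over-year logic (assuming a monthly chart when the timeframe is not 3M; not the published code):
//@version=5
indicator("ΔYoY (sketch)")
// 1-year lookback in bars: 4 quarterly bars on a 3M chart, otherwise 12 monthly bars assumed.
lookback = timeframe.period == "3M" ? 4 : 12
smaLen   = input.int(4, "SMA length")
yoy      = (close / close[lookback] - 1) * 100
plot(yoy, "YoY %")
plot(ta.sma(yoy, smaLen), "YoY % (SMA)", color = color.orange)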
Ln(close)
Natural log indicator for normalizing data. An SMA is applied so you can take the average of that normalization factor. I personally use it for US economic data where the values are very large (GDI, Fed Balance Sheet, USM2, etc.) and the year-over-year delta is either not pertinent (USM2) or not available (GDI.. although I did make an indicator to get YoY :D). Any additional ideas, leave a comment and I'll take a look.
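A minimal sketch of the same idea (hypothetical SMA length):
//@version=5
indicator("Ln (sketch)")
len   = input.int(12, "SMA length")
lnSrc = math.log(close)                    // natural log of the series
plot(lnSrc, "ln(close)")
plot(ta.sma(lnSrc, len), "SMA of ln(close)", color = color.orange)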
MathTransformLibrary "MathTransform"
Auxiliary functions for transforming data using mathematical and statistical methods
scaler_zscore(x, lookback_window)
Calculates Z-Score normalization of a series.
Parameters:
x (float) : : floating point series to normalize
lookback_window (int) : : lookback period for calculating mean and standard deviation
Returns: Z-Score normalized series
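For context, the Z-Score transform described above is the familiar (x − mean) / stdev over a rolling window; a standalone sketch of the same idea (not the library's code):
//@version=5
indicator("Z-Score (sketch)")
src = input.source(close, "Source")
win = input.int(100, "Lookback window")
zscore = (src - ta.sma(src, win)) / ta.stdev(src, win)
plot(zscore, "Z-Score")
hline(0)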
scaler_min_max(x, lookback_window, min_val, max_val, empiric_min, empiric_max, empiric_mid)
Performs Min-Max scaling of a series within a given window, user-defined bounds, and optional midpoint
Parameters:
x (float) : : floating point series to transform
lookback_window (int) : : int : optional lookback window size to consider for scaling.
min_val (float) : : float : minimum value of the scaled range. Default is 0.0.
max_val (float) : : float : maximum value of the scaled range. Default is 1.0.
empiric_min (float) : : float : user-defined minimum value of the input data. This means that the output could exceed the `min_val` bound if there is data in `x` lesser than `empiric_min`. If na, it's calculated from `x` and `lookback_window`.
empiric_max (float) : : float : user-defined maximum value of the input data. This means that the output could exceed the `max_val` bound if there is data in `x` greater than `empiric_max`. If na, it's calculated from `x` and `lookback_window`.
empiric_mid (float) : : float : user-defined midpoint value of the input data. If na, it's calculated from `empiric_min` and `empiric_max`.
Returns: rescaled series
log(x, base)
Applies logarithmic transformation to a value, base can be user-defined.
Parameters:
x (float) : : floating point value to transform
base (float) : : logarithmic base, must be greater than 0
Returns: logarithm of the value to the given base, if x <= 0, returns logarithm of 1 to the given base
exp(x, base)
Applies exponential transformation to a value, base can be user-defined.
Parameters:
x (float) : : floating point value to transform
base (float) : : base of the exponentiation, must be greater than 0
Returns: the result of raising the base to the power of the value
power(x, exponent)
Applies power transformation to a value, exponent can be user-defined.
Parameters:
x (float) : : floating point value to transform
exponent (float) : : exponent for the transformation
Returns: the value raised to the given exponent, preserving the sign of the original value
tanh(x, scale)
The hyperbolic tangent is the ratio of the hyperbolic sine and hyperbolic cosine. It limits an output to a range of −1 to 1.
Parameters:
x (float) : : floating point series
scale (float)
sigmoid(x, scale, offset)
Applies the sigmoid function to a series.
Parameters:
x (float) : : floating point series to transform
scale (float) : : scaling factor for the sigmoid function
offset (float) : : offset for the sigmoid function
Returns: transformed series using the sigmoid function
sigmoid_double(x, scale, offset)
Applies a double sigmoid function to a series, handling positive and negative values differently.
Parameters:
x (float) : : floating point series to transform
scale (float) : : scaling factor for the sigmoid function
offset (float) : : offset for the sigmoid function
Returns: transformed series using the double sigmoid function
logistic_decay(a, b, c, t)
Calculates logistic decay based on given parameters.
Parameters:
a (float) : : parameter affecting the steepness of the curve
b (float) : : parameter affecting the direction of the decay
c (float) : : the upper bound of the function's output
t (float) : : time variable
Returns: value of the logistic decay function at time t
Sharpe and Sortino Ratios
█ OVERVIEW
This indicator calculates the Sharpe and Sortino ratios using a chart symbol's periodic price returns, offering insights into the symbol's risk-adjusted performance. It features the option to calculate these ratios by comparing the periodic returns to a fixed annual rate of return or the returns from another selected symbol's context.
█ CONCEPTS
Returns, risk, and volatility
The return on an investment is the relative gain or loss over a period, often expressed as a percentage. Investment returns can originate from several sources, including capital gains, dividends, and interest income. Many investors seek the highest returns possible in the quest for profit. However, prudent investing and trading entails evaluating such returns against the associated risks (i.e., the uncertainty of returns and the potential for financial losses) for a clearer perspective on overall performance and sustainability.
The profitability of an investment typically comes at the cost of enduring market swings, noise, and general uncertainty. To navigate these turbulent waters, investors and portfolio managers often utilize volatility , a measure of the statistical dispersion of historical returns, as a foundational element in their risk assessments because it provides a tangible way to gauge the uncertainty in returns. High volatility suggests increased uncertainty and, consequently, higher risk, whereas low volatility suggests more stable returns with minimal fluctuations, implying lower risk. These concepts are integral components in several risk-adjusted performance metrics, including the Sharpe and Sortino ratios calculated by this indicator.
Risk-free rate
The risk-free rate represents the rate of return on a hypothetical investment carrying no risk of financial loss. This theoretical rate provides a benchmark for comparing the returns on a risky investment and evaluating whether its excess returns justify the risks. If an investment's returns are at or below the theoretical risk-free rate or the risk premium is below a desired amount, it may suggest that the returns do not compensate for the extra risk, which might be a call to reassess the investment.
Since the risk-free rate is a theoretical concept, investors often utilize proxies for the rate in practice, such as Treasury bills and other government bonds. Conventionally, analysts consider such instruments "risk-free" for a domestic holder, as they are a form of government obligation with a low perceived likelihood of default.
The average yield on short-term Treasury bills, influenced by economic conditions, monetary policies, and inflation expectations, has historically hovered around 2-3% over the long term. This range also aligns with central banks' inflation targets. As such, one may interpret a value within this range as a minimum proxy for the risk-free rate, as it may correspond to the minimum rate required to maintain purchasing power over time. This indicator uses a default value of 2% as the risk-free rate in its Sharpe and Sortino ratio calculations. Users can adjust this value from the "Risk-free rate of return" input in the "Settings/Inputs" tab.
Sharpe and Sortino ratios
The Sharpe and Sortino ratios are two of the most widely used metrics that offer insight into an investment's risk-adjusted performance . They provide a standardized framework to compare the effectiveness of investments relative to their perceived risks. These metrics can help investors determine whether the returns justify the risks taken to achieve them, promoting more informed investment decisions.
Both metrics measure risk-adjusted performance similarly. However, they have some differences in their formulas and their interpretation:
1. Sharpe ratio
The Sharpe ratio , developed by Nobel laureate William F. Sharpe, measures the performance of an investment compared to a theoretically risk-free asset, adjusted for the investment risk. The ratio uses the following formula:
Sharpe Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑎
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑎 = Standard deviation of the investment's returns (volatility)
A higher Sharpe ratio indicates a more favorable risk-adjusted return, as it signifies that the investment produced higher excess returns per unit of increase in total perceived risk.
2. Sortino ratio
The Sortino ratio is a modified form of the Sharpe ratio that only considers downside volatility , i.e., the volatility of returns below the theoretical risk-free benchmark. Although it shares close similarities with the Sharpe ratio, it can produce very different values, especially when the returns do not have a symmetrical distribution, since it does not penalize upside and downside volatility equally. The ratio uses the following formula:
Sortino Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑑
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑑 = Downside deviation (standard deviation of negative excess returns, or downside volatility)
The Sortino ratio offers an alternative perspective on an investment's return-generating efficiency since it does not consider upside volatility in its calculation. A higher Sortino ratio signifies that the investment produced higher excess returns per unit of increase in perceived downside risk.
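To make the two formulas concrete, here is a compact standalone sketch that computes both ratios from an array of periodic returns (hypothetical helper and variable names; the indicator itself relies on the RiskMetrics library rather than this code):
//@version=5
indicator("Sharpe & Sortino from returns (sketch)")
sharpeSortino(returns, periodBenchmark) =>
    int   n          = array.size(returns)
    float meanExcess = n > 0 ? array.avg(returns) - periodBenchmark : na
    float sd         = n > 1 ? array.stdev(returns) : na
    // Downside deviation: root mean square of the negative excess returns (one common convention).
    float sumSq = 0.0
    for r in returns
        float excess = r - periodBenchmark
        sumSq += excess < 0 ? excess * excess : 0.0
    float dd      = n > 0 ? math.sqrt(sumSq / n) : na
    float sharpe  = sd > 0 ? meanExcess / sd : na
    float sortino = dd > 0 ? meanExcess / dd : na
    [sharpe, sortino]
// Demo: monthly close-to-close returns compared to a 2% annual benchmark (2% / 12 per month).
var float prevMonthClose = na
var array<float> rets = array.new_float()
if timeframe.change("1M")
    if not na(prevMonthClose)
        array.push(rets, (close[1] / prevMonthClose - 1) * 100)
    prevMonthClose := close[1]
[sharpe, sortino] = sharpeSortino(rets, 2.0 / 12)
plot(sharpe, "Sharpe")
plot(sortino, "Sortino", color = color.orange)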
The risk-free rate (𝑅𝑓) in the numerator of both ratio formulas acts as a baseline for comparing an investment's performance to a theoretical risk-free alternative. By subtracting the risk-free rate from the expected return (𝑅𝑎−𝑅𝑓), the numerator essentially represents the risk premium of the investment.
Comparison with another symbol
In addition to the conventional Sharpe and Sortino ratios, which compare an instrument's returns to a risk-free rate, this indicator can also compare returns to a user-specified benchmark symbol , allowing the calculation of Information ratios .
An Information ratio is a generalized form of the Sharpe ratio that compares an investment's returns to a risky benchmark , such as SPY, rather than a risk-free rate. It measures the investment's active return (the difference between its returns and the benchmark returns) relative to its tracking error (i.e., the volatility of the active return) using the following formula:
𝐼𝑅 = (𝑅𝑝 − 𝑅𝑏) / 𝑇𝐸
Where:
• 𝑅𝑝 = Average return on the portfolio or investment
• 𝑅𝑏 = Average return from the benchmark instrument
• 𝑇𝐸 = Tracking error (volatility of 𝑅𝑝 − 𝑅𝑏)
Comparing returns to a benchmark instrument rather than a theoretical risk-free rate offers unique insights into risk-adjusted performance. Higher Information ratios signify that the investment produced higher active returns per unit of increase in risk relative to the benchmark. Conventional choices for non-risk-free benchmarks include major composite indices like the S&P 500 and DJIA, as the resulting ratios can provide insight into the effectiveness of an investment relative to the broader market.
Users can enable this generalized calculation for both the Sharpe and Sortino ratios by selecting the "Benchmark symbol returns" option from the "Benchmark type" dropdown in the "Settings/Inputs" tab.
It's crucial to note that this indicator compares the chart symbol's rate of change (return) to the rate of change in the benchmark symbol. Consequently, not all symbols available on TradingView are suitable for use with these ratios due to the nature of what their values represent. For instance, using a bond as a benchmark will produce distorted results since each bar's values represent yields rather than prices, meaning it compares the rate of change in the yield. To maintain consistency and relevance in the calculated ratios, ensure the values from the compared symbols strictly represent price information.
█ FEATURES
This indicator provides traders with two widely used metrics for assessing risk-adjusted performance, generalized to allow users to compare the chart symbol's price returns to a fixed risk-free rate or the returns from another risky symbol. Below are the key features of this indicator:
Timeframe selection
The "Returns timeframe" input determines the timeframe of the calculated price returns. Users can select any value greater than or equal to the chart's timeframe. The default timeframe is "1M".
Periodic returns tracking
This indicator compounds and collects requested price returns from the selected timeframe over monthly or daily periods, similar to how the Broker Emulator works when calculating strategy performance metrics on trade data. It employs the following logic:
• Track returns over monthly periods if the chart's data spans at least two months.
• Track returns over daily periods if the chart's data spans at least two days but not two months.
• Do not track or collect returns if the data spans less than two days, as the amount of data is insufficient for meaningful ratio calculations.
The indicator uses the returns collected from up to a specified number of periods to calculate the Sharpe and Sortino ratios, depending on the available historical data. It also uses these periodic returns to calculate the average returns it displays in the Data Window.
Users can control the maximum number of periods the indicator analyzes with the "Max no. of periods used" input in the "Settings/Inputs" tab. The default value is 60 periods.
Benchmark specification
The "Benchmark return type" input specifies the benchmark type the indicator compares to the chart symbol's returns in the ratio calculations. It features the following two options:
• "Risk-free rate of return (%)": Compares the price returns to a user-specified annual rate of return representing a theoretical risk-free rate (e.g., 2%).
• "Benchmark symbol return": Compares the price returns to a selected benchmark symbol (e.g., "AMEX:SPY) to calculate Information ratios.
When comparing a chart symbol's returns to a specified benchmark symbol, this indicator aligns the times of data points from the benchmark with the times of data points from the chart's symbol to facilitate a fair comparison between symbols with different active sessions.
Visualization and display
• The indicator displays the periodic returns requested from the specified "Returns timeframe" in a separate pane. The plot includes dynamic colors to signify positive and negative returns.
• When the "Returns timeframe" value represents a higher timeframe, the indicator displays background highlights on the main chart pane to signify when a new value is available and whether the return is positive or negative.
• When the specified benchmark return type is a benchmark symbol, the indicator displays the requested symbol's returns in the separate pane as a gray line for visual comparison.
• Within the separate pane, the indicator displays a single-cell table that shows the base period it uses for periodic returns, the number of periods it uses in the calculation, the timeframe of the requested data, and the calculated Sharpe and Sortino ratios.
• The Data Window displays the chart symbol and benchmark returns, their periodic averages, and the Sharpe and Sortino ratios.
█ FOR Pine Script™ CODERS
• This script utilizes the functions from our RiskMetrics library to determine the size of the periods, calculate and collect periodic returns, and compute the Sharpe and Sortino ratios.
• The `getAlignedPrices()` function in this script requests price data for the chart's symbol and a benchmark symbol with consistent time alignment by utilizing spread symbols , which helps facilitate a fair comparison between different symbol types. Retrieving prices from spreads avoids potential information loss and data misalignment that can otherwise occur when using separate requests from each symbol's context when those symbols have different sessions or data times.
• For consistency, the `getAlignedPrices()` function includes extended hours and dividend adjustment modifiers in its data requests. Additionally, it includes other settings inherited from the chart's context, such as "settlement-as-close" preferences for fair comparison between futures instruments.
• This script uses the `changePercent()` function from our ta library to calculate the percentage changes of the requested data.
• The newly released `force_overlay` parameter in display-related functions allows indicators to display visuals on the main chart and a separate pane simultaneously. We use the parameter in this script's bgcolor() call to display background highlights on the main chart.
Look first. Then leap.
RiskMetrics
█ OVERVIEW
This library is a tool for Pine programmers that provides functions for calculating risk-adjusted performance metrics on periodic price returns. The calculations used by this library's functions closely mirror those the Broker Emulator uses to calculate strategy performance metrics (e.g., Sharpe and Sortino ratios) without depending on strategy-specific functionality.
█ CONCEPTS
Returns, risk, and volatility
The return on an investment is the relative gain or loss over a period, often expressed as a percentage. Investment returns can originate from several sources, including capital gains, dividends, and interest income. Many investors seek the highest returns possible in the quest for profit. However, prudent investing and trading entails evaluating such returns against the associated risks (i.e., the uncertainty of returns and the potential for financial losses) for a clearer perspective on overall performance and sustainability.
One way investors and analysts assess the risk of an investment is by analyzing its volatility , i.e., the statistical dispersion of historical returns. Investors often use volatility in risk estimation because it provides a quantifiable way to gauge the expected extent of fluctuation in returns. Elevated volatility implies heightened uncertainty in the market, which suggests higher expected risk. Conversely, low volatility implies relatively stable returns with relatively minimal fluctuations, thus suggesting lower expected risk. Several risk-adjusted performance metrics utilize volatility in their calculations for this reason.
Risk-free rate
The risk-free rate represents the rate of return on a hypothetical investment carrying no risk of financial loss. This theoretical rate provides a benchmark for comparing the returns on a risky investment and evaluating whether its excess returns justify the risks. If an investment's returns are at or below the theoretical risk-free rate or the risk premium is below a desired amount, it may suggest that the returns do not compensate for the extra risk, which might be a call to reassess the investment.
Since the risk-free rate is a theoretical concept, investors often utilize proxies for the rate in practice, such as Treasury bills and other government bonds. Conventionally, analysts consider such instruments "risk-free" for a domestic holder, as they are a form of government obligation with a low perceived likelihood of default.
The average yield on short-term Treasury bills, influenced by economic conditions, monetary policies, and inflation expectations, has historically hovered around 2-3% over the long term. This range also aligns with central banks' inflation targets. As such, one may interpret a value within this range as a minimum proxy for the risk-free rate, as it may correspond to the minimum rate required to maintain purchasing power over time.
The built-in Sharpe and Sortino ratios that strategies calculate and display in the Performance Summary tab use a default risk-free rate of 2%, and the metrics in this library's example code use the same default rate. Users can adjust this value to fit their analysis needs.
Risk-adjusted performance
Risk-adjusted performance metrics gauge the effectiveness of an investment by considering its returns relative to the perceived risk. They aim to provide a more well-rounded picture of performance by factoring in the level of risk taken to achieve returns. Investors can utilize such metrics to help determine whether the returns from an investment justify the risks and make informed decisions.
The two most commonly used risk-adjusted performance metrics are the Sharpe ratio and the Sortino ratio.
1. Sharpe ratio
The Sharpe ratio , developed by Nobel laureate William F. Sharpe, measures the performance of an investment compared to a theoretically risk-free asset, adjusted for the investment risk. The ratio uses the following formula:
Sharpe Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑎
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑎 = Standard deviation of the investment's returns (volatility)
A higher Sharpe ratio indicates a more favorable risk-adjusted return, as it signifies that the investment produced higher excess returns per unit of increase in total perceived risk.
2. Sortino ratio
The Sortino ratio is a modified form of the Sharpe ratio that only considers downside volatility , i.e., the volatility of returns below the theoretical risk-free benchmark. Although it shares close similarities with the Sharpe ratio, it can produce very different values, especially when the returns do not have a symmetrical distribution, since it does not penalize upside and downside volatility equally. The ratio uses the following formula:
Sortino Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑑
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑑 = Downside deviation (standard deviation of negative excess returns, or downside volatility)
The Sortino ratio offers an alternative perspective on an investment's return-generating efficiency since it does not consider upside volatility in its calculation. A higher Sortino ratio signifies that the investment produced higher excess returns per unit of increase in perceived downside risk.
█ CALCULATIONS
Return period detection
Calculating risk-adjusted performance metrics requires collecting returns across several periods of a given size. Analysts may use different period sizes based on the context and their preferences. However, two widely used standards are monthly or daily periods, depending on the available data and the investment's duration. The built-in ratios displayed in the Strategy Tester utilize returns from either monthly or daily periods in their calculations based on the following logic:
• Use monthly returns if the history of closed trades spans at least two months.
• Use daily returns if the trades span at least two days but less than two months.
• Do not calculate the ratios if the trade data spans fewer than two days.
This library's `detectPeriod()` function applies related logic to available chart data rather than trade data to determine which period is appropriate:
• It returns true if the chart's data spans at least two months, indicating that it's sufficient to use monthly periods.
• It returns false if the chart's data spans at least two days but not two months, suggesting the use of daily periods.
• It returns na if the length of the chart's data covers less than two days, signifying that the data is insufficient for meaningful ratio calculations.
It's important to note that programmers should only call `detectPeriod()` from a script's global scope or within the outermost scope of a function called from the global scope, as it requires the time value from the first bar to accurately measure the amount of time covered by the chart's data.
Collecting periodic returns
This library's `getPeriodicReturns()` function tracks price return data within monthly or daily periods and stores the periodic values in an array . It uses a `detectPeriod()` call as the condition to determine whether each element in the array represents the return over a monthly or daily period.
The `getPeriodicReturns()` function has two overloads. The first overload requires two arguments and outputs an array of monthly or daily returns for use in the `sharpe()` and `sortino()` methods. To calculate these returns:
1. The `percentChange` argument should be a series that represents percentage gains or losses. The values can be bar-to-bar return percentages on the chart timeframe or percentages requested from a higher timeframe.
2. The function compounds all non-na `percentChange` values within each monthly or daily period to calculate the period's total return percentage. When the `percentChange` represents returns from a higher timeframe, ensure the requested data includes gaps to avoid compounding redundant values.
3. After a period ends, the function queues the compounded return into the array , removing the oldest element from the array when its size exceeds the `maxPeriods` argument.
The resulting array represents the sequence of closed returns over up to `maxPeriods` months or days, depending on the available data.
The second overload of the function includes an additional `benchmark` parameter. Unlike the first overload, this version tracks and collects differences between the `percentChange` and the specified `benchmark` values. The resulting array represents the sequence of excess returns over up to `maxPeriods` months or days. Passing this array to the `sharpe()` and `sortino()` methods calculates generalized Information ratios , which represent the risk-adjustment performance of a sequence of returns compared to a risky benchmark instead of a risk-free rate. For consistency, ensure the non-na times of the `benchmark` values align with the times of the `percentChange` values.
Ratio methods
This library's `sharpe()` and `sortino()` methods respectively calculate the Sharpe and Sortino ratios based on an array of returns compared to a specified annual benchmark. Both methods adjust the annual benchmark based on the number of periods per year to suit the frequency of the returns:
• If the method call does not include a `periodsPerYear` argument, it uses `detectPeriod()` to determine whether the returns represent monthly or daily values based on the chart's history. If monthly, the method divides the `annualBenchmark` value by 12. If daily, it divides the value by 365.
• If the method call does specify a `periodsPerYear` argument, the argument's value supersedes the automatic calculation, facilitating custom benchmark adjustments, such as dividing by 252 when analyzing collected daily stock returns.
When the array passed to these methods represents a sequence of excess returns , such as the result from the second overload of `getPeriodicReturns()`, use an `annualBenchmark` value of 0 to avoid comparing those excess returns to a separate rate.
By default, these methods only calculate the ratios on the last available bar to minimize their resource usage. Users can override this behavior with the `forceCalc` parameter. When the value is true , the method calculates the ratio on each call if sufficient data is available, regardless of the bar index.
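A hypothetical usage sketch (the import path and version below are placeholders — substitute the library's actual published path):
//@version=5
indicator("RiskMetrics usage (sketch)")
// Placeholder import path — replace with the real publisher/name/version.
import TradingView/RiskMetrics/1 as rm
// Bar-to-bar percentage change of the chart's symbol.
float pctChange = (close / close[1] - 1) * 100
// Collect up to 60 monthly or daily periodic returns, as determined by the chart's history span.
returnsArray = rm.getPeriodicReturns(pctChange, 60)
// Sharpe and Sortino ratios against a 2% annual risk-free benchmark (calculated on the last bar).
plot(rm.sharpeRatio(returnsArray, 2.0), "Sharpe")
plot(rm.sortinoRatio(returnsArray, 2.0), "Sortino", color = color.orange)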
Look first. Then leap.
█ FUNCTIONS & METHODS
This library contains the following functions:
detectPeriod()
Determines whether the chart data has sufficient coverage to use monthly or daily returns
for risk metric calculations.
Returns: (bool) `true` if the period spans more than two months, `false` if it otherwise spans more
than two days, and `na` if the data is insufficient.
getPeriodicReturns(percentChange, maxPeriods)
(Overload 1 of 2) Tracks periodic return percentages and queues them into an array for ratio
calculations. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
maxPeriods (simple int) : (simple int) The maximum number of periodic returns to store in the returned array.
Returns: (array) An array containing the overall percentage changes for each period, limited
to the maximum specified by `maxPeriods`.
getPeriodicReturns(percentChange, benchmark, maxPeriods)
(Overload 2 of 2) Tracks periodic excess return percentages and queues the values into an
array. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
benchmark (float) : (series float) The benchmark percentage to compare against `percentChange` values.
The function compounds non-na values from each bar within monthly or
daily periods and subtracts the results from the compounded `percentChange` values to
calculate the excess returns. For consistency, ensure this series has a similar history
length to the `percentChange` with aligned non-na value times.
maxPeriods (simple int) : (simple int) The maximum number of periodic excess returns to store in the returned array.
Returns: (array) An array containing monthly or daily excess returns, limited
to the maximum specified by `maxPeriods`.
method sharpeRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sharpe ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sharpe ratio, which estimates the excess return per unit of total volatility.
method sortinoRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sortino ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sortino ratio, which estimates the excess return per unit of downside
volatility.
TASC 2024.07 Gaps and Extreme Closes
█ OVERVIEW
This script, inspired by Perry Kaufman's article "Trading Opening Gaps and Extreme Closes in Stocks" from the July 2024 edition of TASC's Traders' Tips, provides analytical insights into stock price behaviors following significant price moves. Information about the frequency, pullbacks, and closing patterns of these extreme price movements can aid in developing more effective trading strategies by clarifying what to expect during volatile market conditions.
█ CONCEPTS
Perry Kaufman's article investigates the behavior of stock prices following substantial opening gaps and extreme closing moves to identify patterns and expectations that traders can utilize to make informed decisions. The motivation behind the article is to offer traders a more scientific approach to understanding price movements during volatile market conditions, particularly during earnings season or significant economic events. Kaufman's analysis reveals that stock prices have a history of exhibiting certain behaviors after substantial price gaps and extreme closes. This script follows Perry Kaufman's study and helps provide insight into how prices often behave after significant price changes. This analysis can help traders establish price movement expectations and potential strategies for trading such occurrences.
█ CALCULATIONS
Input Parameters:
This script offers users the choice to analyze "Opening Gaps" or "Extreme Closes" for price movements of different predefined magnitudes in a specified direction ("Upward" or "Downward").
Outputs:
Based on the specified inputs, the script performs the following calculations for the active ticker displayed on the chart:
Frequency of Extreme Price Movements : Quantifies the occurrences of directional price movements within predefined percentage ranges.
Average Pullbacks : Computes the average retracement (pullback) from analyzed price movements within each percentage range.
Average Closes : Analyzes the typical closing behavior relative to the directional price movements within each range.
The script organizes the results from these calculations within the table on a separate chart pane, providing users with helpful insights into how a stock historically behaved following significant price movements.
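For readers who want to experiment, a minimal sketch of the raw per-bar measurements behind such a study is shown below. This is not Kaufman's published code; the bucketing into percentage ranges and the averaging are left out, and the variable names are illustrative only:

//@version=5
indicator("Gap and extreme close measurement sketch")

// Opening gap: today's open relative to yesterday's close, in percent.
gapPct = (open - close[1]) / close[1] * 100

// Extreme close: close-to-close change, in percent.
closeToClosePct = (close - close[1]) / close[1] * 100

// Pullback after an upward gap: retracement from the open toward yesterday's close, in percent.
pullbackPct = gapPct > 0 ? (open - low) / close[1] * 100 : na

plot(gapPct, "Opening gap %", color.blue)
plot(closeToClosePct, "Close-to-close %", color.gray)
plot(pullbackPct, "Pullback after up-gap %", color.red)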
Dickey-Fuller Test for Mean Reversion and Stationarity
**IF YOU NEED EXTRA SPECIAL HELP UNDERSTANDING THIS INDICATOR, GO TO THE BOTTOM OF THE DESCRIPTION FOR AN EVEN SIMPLER DESCRIPTION**
Dickey Fuller Test:
The Dickey-Fuller test is a statistical test used to determine whether a time series is stationary or contains a unit root (a characteristic that makes it non-stationary). Stationarity means that the statistical properties of a time series, such as its mean and variance, are constant over time. In practice, the test checks whether the time series is mean-reverting or not. Many traders falsely assume that raw stock prices are mean-reverting when they are not: statistical models consistently show that stock prices are almost always positively autocorrelated, and tests like this one show that they are not stationary.
Note: This indicator uses past data, and its results will keep changing as new data comes in. A series that tests as stationary during one window (i.e., the test statistic falls below the critical value) will not necessarily remain stationary; for raw prices, such a reading is a rare occurrence on this test.
The indicator also shows the option to either choose Raw Price, Simple Returns, or Log Returns for the test.
Raw Prices:
Stock prices are usually non-stationary because they follow some type of random walk, exhibiting positive autocorrelation and trends in the long term.
The Dickey-Fuller test on raw prices will indicate non-stationary most of the time since prices are expected to have a unit root. (If the test statistic is higher than the critical value, it suggests the presence of a unit root, confirming non-stationarity.)
Simple Returns and Log Returns:
Simple and log returns are more stationary than prices, if not completely stationary, because they measure relative changes rather than absolute levels.
This test on simple and log returns may indicate stationary behavior, especially over longer periods. (The test statistic being below the critical value suggests the absence of a unit root, indicating stationarity.)
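As a quick reference, the two return series mentioned above can be derived from closing prices as in the following minimal sketch (the indicator's own source and option handling may differ):

//@version=5
indicator("Return series sketch")

// Simple return: relative change from the previous close.
simpleReturn = close / close[1] - 1

// Log return: natural log of the price ratio; close to the simple return for small moves.
logReturn = math.log(close / close[1])

plot(simpleReturn, "Simple return", color.teal)
plot(logReturn, "Log return", color.orange)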
Null Hypothesis (H0): The time series has a unit root (it is non-stationary).
Alternative Hypothesis (H1): The time series does not have a unit root (it is stationary).
Interpretation: If the test statistic is less than the critical value, we reject the null hypothesis and conclude that the time series is stationary.
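For reference, one common form of the Dickey-Fuller regression (written here with a constant term; the exact specification used by the indicator may differ) and its test statistic can be expressed as:

$$\Delta y_t = \alpha + \gamma\, y_{t-1} + \varepsilon_t, \qquad DF = \frac{\hat{\gamma}}{\operatorname{SE}(\hat{\gamma})}$$

A test statistic far enough below zero to cross the critical value is evidence against a unit root, i.e., evidence that the series is stationary.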
Types of Dickey-Fuller Tests:
1. (What this indicator uses) Standard Dickey-Fuller Test:
Tests the null hypothesis that a unit root is present in a simple autoregressive model.
This test is used for simple cases where we just want to check if the series has a consistent statistical property over time without considering any trends or additional complexities.
It examines the relationship between the current value of the series and its previous value to see if the series tends to drift over time or revert to the mean.
2. Augmented Dickey-Fuller (ADF) Test:
Tests for a unit root while accounting for more complex structures like trends and higher-order correlations in the data.
This test is more robust and is used when the time series has trends or other patterns that need to be considered.
It extends the regular test by including additional terms to account for the complexities, and this test may be more reliable than the regular Dickey-Fuller Test.
For things like stock prices, the ADF would be more appropriate because stock prices are almost always trending and positively autocorrelated, while the standard Dickey-Fuller test is more appropriate for simpler time series.
Critical Values
This indicator uses the following critical values that are essential for interpreting the Dickey-Fuller test results. The critical values depend on the chosen significance levels:
1% Significance Level: Critical value of -3.43.
5% Significance Level: Critical value of -2.86.
10% Significance Level: Critical value of -2.57.
These critical values are thresholds that help determine whether to reject the null hypothesis of a unit root (non-stationarity). If the test statistic is less than (or more negative than) the critical value, it indicates that the time series is stationary. Conversely, if the test statistic is greater than the critical value, the series is considered non-stationary.
This indicator uses a dotted blue line by default to show the critical value. If the test statistic, shown as the gray column, drops below the critical value, the column turns yellow and the test indicates that the time series is stationary, or mean reverting, for the current period of time.
What does this mean?
This is the weekly chart of BTCUSD with the Dickey-Fuller Test, using a length of 100 and the 1% significance level.
So basically, in the long term, mean-reversion strategies that involve raw prices are not a good idea. You don't really need a statistical test for this either; just from looking at the chart itself, you can see that prices in the long term are trending and no mean reversion is present.
For the people who can't understand that the gray column being above the blue dotted line means price doesn't mean revert, here is a more simple description (you know you are):
Average (I have to include the meaning because they may not know what average is): The average is what you get when you add up all the numbers and then divide by how many numbers there are. EX: If you have the numbers 2, 4, and 6, you add them up to get 12, and then divide by 3 (because there are 3 numbers), so the average is 4. It tells you what a typical number is in a group of numbers.
This indicator checks if a time series (like stock prices) tends to return to its average value over time.
Raw prices, which is just the regular price chart, are usually not mean-reverting (they're "always" positively autocorrelated, but this group of people doesn't like that word). Price follows trends.
Simple returns and log returns are more likely to have periods of mean reversion.
How to use it:
Gray Column (the gray bars) Above the Blue Dotted Line: The price does not mean revert (non-stationary).
Gray Column Below the Blue Dotted Line: The time series mean reverts (stationary).
So, if the test statistic (gray column) is below the critical value, which is the blue dotted line, then the series is stationary and mean reverting, but if it is above the blue dotted line, then the time series is not stationary or mean reverting, and strategies involving mean reversion will most likely result in a loss given enough occurrences.
[Pandora] Error Function Treasure Trove - ERF/ERFI/Sigmoids+
PRAISE:
At this time, I have to graciously thank the wonderful minds behind the new "Pine Profiler Mode" (PPM). Directly prior to this release, it allowed me to ascertain script performance even more thoroughly. While I usually write mostly in highly optimized Pine code, PPM visually identified a few bottlenecks that would otherwise be hard to spot. Anyone who contributed to PPM's creation and testing before release... BRAVO!!! I commend all of those who assisted in its state-of-the-art engineering and inception, well done!
BACKSTORY:
This script is specifically being released in defense of another member, an exceptionally unique PhD. It was brought to my attention that a script-mod-event occurred regarding the publishing of a measly antiquated error function (ERF) calculation within his script. This sadly resulted in the now former member jumping ship after receiving unmannerly responses amidst his curious inquiries as to why his erf() was modded. To forbid rusty and rudimentary formulations because a mod-on-duty is temporally offended by a non-nefarious release of code is, in MY opinion, an injustice to principles of perpetuating open-source code intended to benefit thousands to millions of community members. While Pine is the heart and soul of TV, the mathematical concepts contributed from the minds of members are the inspirational fuel of curiosity that powers its pertinent reason to exist and evolve.
It is an indisputable fact that most members are not greatly skilled Pine Poets. Many members may be incapable of innovating robust function code in Pine, even if they have one or more PhDs. We ALL come from various disciplines of mathematical comprehension and education. Some mathematicians are not greatly skilled at coding, while some coders are not exceptional at math. So... what am I to do to attempt to resolve this circumstantial challenge??? Those who know me best are aware that I will always side with "the right side of history" in order to accomplish my primary self-defined missions I choose to accept. Serving as an algorithmic advocate, I felt compelled to intercede by compiling numerous error functions into elegant code of very high caliber that any and every TV member may choose to employ, so this ERROR never happens again.
After weeks of contemplation into algorithms I knew little about, I prioritized myself to resolve an unanticipated matter by creating advanced formulas of exquisitely crafted error functions refined to the best of my current abilities. My aversion for unresolved problems motivated me to eviscerate error function insufficiencies with many more rigid formulations beyond what is thought to exist. ERF needed a proper algorithmic exorcism anyways. In my furiosity, I contemplated an array of madMAXimum diplomatic demolition methods, choosing the chain saw massacre technique to slaughter dysfunctionalities I encountered on a battered ERF roadway. This resulted in prolific solutions that should assuredly endure the test of time. Poetically, as you will come to see, I am ripping the lid off of Pandora's box of error functions in this case to correct wrongs into a splendid bundle of rights for members.
INTENTION:
Error function (ERF) enthusiasts... PREPARE FOR GLORY!! The specific purpose of this script is to deprecate classic error functions with the creation of a fierce and formidable army of superior formulations, each having varying attributes of computational complexity with differing absolute error ranges in their results for multiple compute scenarios. This is NOT an indicator... It is intended to allow members to embark on endeavors to advance the profound knowledge base of this growing worldwide community of 60+ million inquisitive minds. For those of you who believe computational mathematics and statistics are near completion at their finest, I am here to inform you that this is ridiculous to ponder. We are nowhere near the statistical excellence that can and will eventually exist. At this time, metaphorically speaking, we are merely scratching microns off of the surface of the skin of a statistical apple Isaac Newton once pondered.
THIS RELEASE:
Following weeks of pondering methodical experiments beyond the ordinary, I am liberating these wild notions of my error function explorations to the entire globe as copyleft code, not just Pine. This Pandora's basket of ERFs is being openly disclosed for the sake of the sanctity of mathematics, empirical science (not the garbage we are told by CONTROLocrats to blindly trust), revolutionary cutting edge engineering, cosmology, physics, information technology, artificial intelligence, and EVERY other mathematical branch of human knowledge being discovered over centuries. I do believe James Glaisher would favor my aims concerning ERF aspirations embracing the "Power of Pine".
The included functions are intended for TV members to use in any way they see fit. This is a gift to ALL members to foster future innovative excellence on this platform. Any attempt to moderate this code without notification of "self-evident clear and just cause" will be considered an irrevocable egregious action. The original foundational PURPOSE of establishing script moderation (I clearly remember) was primarily to maintain active vigilance over a growing community against intentional nefarious actions and/or behaviors in blatant disrespect to other author's works AND also thwart rampant copypasting bandit operations, all while accommodating balanced principles of fairness for an educational community cause via open source publishing that should support future algorithmic inventions well beyond my lifespan.
APPLICATIONS:
The related error functions are used in probability theory, statistics, and numerous scientific and engineering disciplines. Their key characteristics and applications are innumerable in computational realms. Their versatility and significance make them a fundamental tool in arenas of quantitative analysis and scientific research...
Probability Theory - Is widely used in probability theory to calculate probabilities and quantiles of the normal distribution.
Statistics - It's related to the Gaussian integral and plays a crucial role in statistics, especially in hypothesis testing and confidence interval calculations.
Physics - In physics, it arises in the study of diffusion equations, quantum mechanics, and heat conduction problems.
Engineering - Applications exist in engineering disciplines such as signal processing, control theory, and telecommunications.
Error Analysis - It's employed in error analysis and uncertainty quantification.
Numeric Approximations - Due to its lack of a closed-form expression, numerical methods are often employed to approximate erf/erfi().
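As a point of reference only (this is the classic Abramowitz & Stegun 7.1.26 polynomial approximation, not one of the refined formulations bundled in this script), an erf estimate with absolute error on the order of 1.5e-7 can be sketched in Pine like this:

//@version=5
indicator("erf approximation sketch (Abramowitz & Stegun 7.1.26)")

// Classic polynomial approximation of the error function, |error| <= ~1.5e-7.
erfAS(float x) =>
    ax = math.abs(x)
    t = 1.0 / (1.0 + 0.3275911 * ax)
    poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t
    y = 1.0 - poly * math.exp(-ax * ax)
    x >= 0 ? y : -y  // erf is odd: erf(-x) = -erf(x)

// Sweep an input sequence with bar_index to view the curve's contour.
plot(erfAS(bar_index / 100.0 - 3.0), "erf estimate", color.purple)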
AI, LLMs, & MACHINE LEARNING:
The error function (ERF) is indispensable to various AI applications, particularly due to its relation to Gaussian distributions and error analysis. It is used in Gaussian processes for regression and classification, probabilistic inference for Bayesian networks, soft margin computation in SVMs, neural networks involving Gaussian activation functions or noise, and clustering algorithms like Gaussian Mixture Models. Improved ERF approximations can enhance precision in these applications, reduce computational complexity, handle outliers and noise better, and improve optimization and convergence, possibly leading to more accurate, efficient, and robust AI systems.
BONUS ALGORITHMS:
While ERFs are versatile, their inverses also exist in the form of inverse error functions (ERFIs). I have also included a modified form of the inverse Fisher transform alongside MY sigmoid (sigmyod). I am uncertain what sigmyod() may be used for, but it's a culmination of my examinations deep into "sigmoid domains", something I am fascinated by. Whatever implications it may possess, I am unveiling it along with its cousin functions. For curious minds, this quality of composition seen here is ideally what underlies what I would term "Pandora functionality" that empowers my Pandora indication. I go through hordes of formulations, testing, and inspection to find what appears to be the most beneficial logical/mathematical equation to apply...
SCRIPT OPERATION:
To showcase the characteristics and performance of my ERF/ERFI formulations, I devised a multi-modal script. By using bar_index, I generated a broad sequence of numeric values to input into the first ERF/ERFI parameter. These sequences allow you to inspect the contours of the error function's outputs for both ERF and ERFI. When combined with compute-intensive precision functions (CIPFs), the polynomial function output values can be subtracted from my CIPFs to obtain results of absolute error, displaying the accuracy of the many polynomial estimation functions I tuned in testing for Pine's float environment.
A host of numeric input settings are wildly adjustable to inspect values/curvatures across the range of numeric input sequences. Very large numbers, such as Divisor:100,000,100/Offset:200,000,000 for ERF modes or... Divisor:100,000,100/Offset:100,000,000 for ERFI modes, will display minuscule output values calculated from input values in close proximity to 0.0 for the various estimates, similar to a microscope. ERFI approximations very near in proximity to +/-1.0 will always yield large deviations of absolute error. Dragging/zooming your chart or using the Offset input will aid with visually clipping off those ERFI extremes where float precision functions cannot suffice.
NOTICE:
perf() and perfi() are intended for precision computation (as good as it basically gets) in a float environment. However, they are CPU intensive (especially perfi). I wouldn't recommend these being used in ANY Pine script unless it's an "absolute necessity" to do so to accomplish your goal. I only built them to obtain "absolute error curvatures" of the error functions for the polynomial approximations. These are visible in the accuracy modes in the indicator Settings.
Swing Distance
Hello fellas,
This simple indicator helps to visualize the distance between swings. It consists of two lines, the highest and the lowest line, which show the highest and lowest value of the set lookback, respectively. Additionally, it plots labels with the distance (in %) between the highest and the lowest line when there is a change in either the highest or the lowest value.
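For context, the core of the calculation can be sketched in a few lines of Pine (the published script's inputs, styling, and label handling are more complete; the names below are illustrative):

//@version=5
indicator("Swing distance sketch", overlay=true)

len = input.int(50, "Lookback")

hiLine = ta.highest(high, len)
loLine = ta.lowest(low, len)

// Distance between the two lines as a percentage of the lowest value.
distPct = (hiLine - loLine) / loLine * 100

plot(hiLine, "Highest", color.green)
plot(loLine, "Lowest", color.red)

// Label the distance whenever either extreme changes.
if hiLine != hiLine[1] or loLine != loLine[1]
    label.new(bar_index, hiLine, str.tostring(distPct, "#.##") + "%", style=label.style_label_down)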
Use Case:
This tool helps you get a feel for which trades you might want to take and which timeframe you might want to use.
Side Note: This indicator is not intended to be used as a signal emitter or filter!
Best regards,
simwai
Ticker Performance Comparison
Ticker Performance Comparison Indicator
With this tool you can compare how three different tickers of your choice have performed over a specific period you choose. It can be used on any timeframe.
As you can see in the image above, I am comparing Nvidia, Bitcoin and Wadzpay over a 365-day period. This shows me at a glance which asset has done better and by how much.
It shows how the closing prices have changed from the start of your chosen period to now, by automatically drawing lines on the same scale.
Key Features:
Lookback Period: You decide how many bars (days, weeks, etc.) back to look from today.
Three Tickers: Enter up to three different ticker symbols to see how they stack up against each other.
Percentage Change: The tool calculates how much each ticker's closing price has changed, in percentage terms, from the start of your lookback period.
Performance Labels: Labels at the end of the period show the percentage change for each ticker.
Important:
Ignore the lines that are drawn before your lookback period: The lines before your chosen lookback period might be misleading. They appear due to the way historical data is processed and should be ignored. Only consider the data and trends from the start of the lookback period you entered to the present for an accurate comparison.
Use this tool to easily compare how different assets have performed over the timeframe that matters to you.
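A rough sketch of the comparison logic follows. The symbols and the 365-bar lookback are placeholders for whatever you enter in the inputs, and the published indicator's line drawing and labels are not reproduced here:

//@version=5
indicator("Ticker performance comparison sketch")

lookback = input.int(365, "Lookback (bars)")
sym1 = input.symbol("NASDAQ:NVDA", "Ticker 1")
sym2 = input.symbol("BITSTAMP:BTCUSD", "Ticker 2")
sym3 = input.symbol("NASDAQ:AAPL", "Ticker 3")

c1 = request.security(sym1, timeframe.period, close)
c2 = request.security(sym2, timeframe.period, close)
c3 = request.security(sym3, timeframe.period, close)

// Percent change from the close `lookback` bars ago to the current close.
perf1 = (c1 - c1[lookback]) / c1[lookback] * 100
perf2 = (c2 - c2[lookback]) / c2[lookback] * 100
perf3 = (c3 - c3[lookback]) / c3[lookback] * 100

plot(perf1, "Ticker 1 %", color.green)
plot(perf2, "Ticker 2 %", color.orange)
plot(perf3, "Ticker 3 %", color.blue)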
Downside Deviation
Downside deviation is a measure of downside risk that focuses on returns that fall below a minimum threshold or minimum acceptable return (MAR). It is used in the calculation of the Sortino ratio, a measure of risk-adjusted return. The Sortino ratio is like the Sharpe ratio, except that it replaces the standard deviation with downside deviation.
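A minimal sketch of the calculation, assuming per-bar simple returns in percent and a user-set minimum acceptable return (the names and defaults below are illustrative):

//@version=5
indicator("Downside deviation sketch")

len = input.int(60, "Return window")
mar = input.float(0.0, "Minimum acceptable return (%)")

// Per-bar simple return, in percent.
ret = (close - close[1]) / close[1] * 100

// Square only the shortfalls below the MAR; returns above it contribute zero.
shortfallSq = math.pow(math.min(ret - mar, 0.0), 2)

// Downside deviation: square root of the average squared shortfall over the window.
downsideDev = math.sqrt(ta.sma(shortfallSq, len))

plot(downsideDev, "Downside deviation (%)", color.red)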