the alpha and omega : topic:mathstat
http://mahalanobis.twoday.net/
Mahalanobis
Mahalanobis
20110527T07:40:46Z
en
hourly
1
20000101T00:00:00Z
the alpha and omega
http://static.twoday.net/mahalanobis/images/icon.gif
http://mahalanobis.twoday.net/

E[e^sX] when X is normally distributed
http://mahalanobis.twoday.net/stories/5410933/
Recently, I asked myself the question: "Why is the expected value of e^sX (X ~ N(0,1)) equal to e^(0.5*s^2)?". You know that compounding a sum of money (A) at a continuously compounded rate X for a period involves multiplying it by e^X. If X is nonrandom, the expected value is A*e^X. If X is random, calculating the expected value becomes less straightforward. The solution for a standard normally distributed X can be derived as follows:<br />
<br />
1. Write down the equation (μ = E[e^sX] where X ~ N(0,1)):<br />
<img title="" height="67" alt="gau01" width="261" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/gau01.gif" />2. Substitute y = x - s and you get:<img title="" height="338" alt="gau2" width="404" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/gau2.gif" />More generally, if X ~ N(μ*, σ<sup>2</sup>), then X = μ* + σZ with Z ~ N(0,1). It follows that<img title="" height="219" alt="gau03" width="210" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/gau03.gif" />Of course, practitioners often know the parameters they are interested in (e.g. s = 0.3), so they can take a shortcut and run a Monte Carlo analysis:<br />
<br />
s <- 0.3<br />
n <- 10000<br />
x <- rnorm(n)<br />
answer <- mean(exp(s*x))<br />
<br />
Or a bit more sophisticated and less arbitrary:<br />
<br />
s <- 0.3<br />
n <- 10000<br />
i <- (1:n)/(n+1)<br />
x <- qnorm(i)<br />
answer <- mean(exp(s*x))<br />
<br />
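The same check can be sketched in Python (a translation of the R snippets above, comparing the closed form against a Monte Carlo estimate; the variable names are mine):

```python
import math
import random

s = 0.3
exact = math.exp(0.5 * s * s)      # closed form: E[e^(sX)] = e^(s^2/2)

# Monte Carlo estimate, mirroring the R snippet above
random.seed(1)
n = 100_000
mc = sum(math.exp(s * random.gauss(0.0, 1.0)) for _ in range(n)) / n

print(exact, mc)  # the two values agree to about two decimal places
```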
NB: The expected value <a href="http://mahalanobis.twoday.net/stories/5402240/">can be very misleading</a>.<br />
Source: <a href="http://static.twoday.net/mahalanobis/files/GeldFinanz.pdf">Stochastische Grundlagen der Finanzmathematik</a>, Klaus Pötzelberger
Mahalanobis
mathstat
Copyright © 2008 Mahalanobis
20081227T21:58:00Z

Volatility hurts.
http://mahalanobis.twoday.net/stories/5402240/
Ernest Chan (<a href="http://epchan.blogspot.com/">blog</a>) writes in his book <a href="http://www.amazon.com/QuantitativeTradingBuildAlgorithmicBusiness/dp/0470284889">Quantitative Trading: How to Build Your Own Algorithmic Trading Business</a>:
<blockquote>Here is a little puzzle that may stymie many a professional trader. Suppose a certain stock exhibits a true (geometric) random walk, by which I mean there is a 50-50 chance that the stock is going up 1 percent or down 1 percent every [day]. If you buy this stock, are you most likely, in the long run and ignoring financing costs, to make money, lose money, or be flat?<br />
Most traders will blurt out the answer "Flat!," and that is wrong. The correct answer is that you will lose money, at the rate of 0.005 percent (or 0.5 basis points) every [day]! This is because for a geometric random walk, the average compounded rate of return is not the return μ, but is g = μ - σ^2/2.</blockquote>
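Chan's 0.5-basis-point drag is a one-line computation; a quick Python check of the ±1 percent coin flip from the quote:

```python
import math

# Average log return of a 50-50 chance of +1% or -1% per day
g = 0.5 * math.log(1.01) + 0.5 * math.log(0.99)

# Compare with the approximation g = mu - sigma^2/2 (mu = 0, sigma = 0.01)
approx = -0.01 ** 2 / 2

print(g, approx)  # both about -5e-05, i.e. -0.005 percent per day
```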
For this very reason, geometric Brownian Motion is often written as<br />
<img title="" height="85" alt="bm" width="329" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm.gif" />where "μ - σ^2/2" is the expected return and "μ" is the return of the expected prices, i.e. ln(E[S<sub>t</sub>]/E[S<sub>t-1</sub>]).<br />
<br />
If you lose 50 percent of your portfolio, you have to make 100 percent to get back to even... that's what everybody knows. But it's also interesting to see how mild volatility hurts over time. Here are ten (random, no cherry-picking) realizations of a geometric Brownian Motion with a daily volatility of 1% (i.e. a yearly volatility of 16% when having 252 trading days) over a period of 100 years:<br />
<img title="" height="367" alt="bm02" width="469" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm02.gif" /><br />
<img title="" height="360" alt="bm01" width="474" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm01.gif" /><br />
<img title="" height="368" alt="bm03" width="471" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm03.gif" /><br />
<img title="" height="360" alt="bm04" width="467" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm04.gif" /><br />
<img title="" height="355" alt="bm05" width="471" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm05.gif" /><br />
<img title="" height="355" alt="bm06" width="472" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm06.gif" /><br />
<img title="" height="353" alt="bm07" width="474" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm07.gif" /><br />
<img title="" height="360" alt="bm08" width="472" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm08.gif" /><br />
<img title="" height="359" alt="bm09" width="471" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm09.gif" /><br />
<img title="" height="361" alt="bm10" width="471" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/bm10.gif" /><br />
<a href="http://www.rproject.org">R Development Core Team</a> (2008). R: A language and environment for statistical computing.<br />
<br />
PS: Hey, this looks promising:<br />
<img title="" height="356" alt="veryprom" width="472" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/veryprom.gif" />
Mahalanobis
mathstat
Copyright © 2008 Mahalanobis
20081222T08:07:00Z

The Skill versus Luck Paradox cont'd
http://mahalanobis.twoday.net/stories/3733213/
<a href="http://mahalanobis.twoday.net/stories/664500/">A while ago</a>, we addressed the following question: A portfolio manager knows that his strategy can, on average, outperform the benchmark index by 3% annually. His portfolio has an annual volatility (standard deviation) of 25% against the index's 15%. Assuming that the correlation between the returns of the portfolio and the returns of the index is 0.9, how many years would it take to outperform the index with 90% probability? <br />
<br />
The correct answer is a whopping 300 years! (<a href="http://mahalanobis.twoday.net/stories/664500/">apply the Itô-Döblin formula</a>)<br />
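The figure can be reproduced in a few lines. This sketch assumes, as in the linked post, that the drift of log(Portfolio/Index) is the 3% outperformance minus the Itô correction (σ<sub>p</sub>² − σ<sub>i</sub>²)/2:

```python
import math
from statistics import NormalDist

alpha = 0.03              # annual outperformance
sp, si, rho = 0.25, 0.15, 0.9

# Tracking-error volatility of log(Portfolio/Index)
sigma_d = math.sqrt(sp**2 + si**2 - 2 * rho * sp * si)

# Ito-corrected drift of log(Portfolio/Index)
g = alpha - (sp**2 - si**2) / 2

# Years T such that P(cumulative excess log return > 0) = 90%
z = NormalDist().inv_cdf(0.90)
T = (z * sigma_d / g) ** 2
print(round(T))  # about 287 -- the "whopping 300 years" order of magnitude
```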
<br />
Today somebody asked me if I could run a couple of simulations to get a better understanding of the result. What I did was plot 20 simulations of log(Portfolio/Index) for varying correlations. For ρ = 0.9, 2 out of 20 (10%) are, as expected, below zero:<br />
<img title="" height="807" alt="skillluckparadox" width="486" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/skillluckparadox.gif" />
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070515T19:50:00Z

My Master's Thesis is Significant
http://mahalanobis.twoday.net/stories/3565003/
While strolling through the library I discovered that the number of pages of my master's thesis is five standard deviations below the average number of pages my friends have written to graduate from the <a href="http://www.wuwien.ac.at">Vienna University of Economics and Business Administration</a>:<br />
<br />
<TABLE width="500" border="1" ><TR><TD align="left" width="130">
<b>Author</b>
</TD>
<TD align="center" width="50">
<b>Pages</b>
</TD>
<TD align="left" >
<b>Title</b>
</TD></TR><TR><TD align="left" width="130">
Michael Sigmund
</TD>
<TD align="center" width="50">
139
</TD>
<TD align="left" >
Anwendungsgebiete der Spieltheorie in den Sozialwissenschaften
</TD></TR><TR><TD align="left" width="130">
Robert Ferstl
</TD>
<TD align="center" width="50">
127
</TD>
<TD align="left" >
Werkzeuge zur Analyse räumlicher Daten – eine Softwareimplementation in EViews und MATLAB
</TD></TR><TR><TD align="left" width="130">
Karin Doppelbauer
</TD>
<TD align="center" width="50">
122
</TD>
<TD align="left" >
Analiz rʹinka mjasa kur v Rossii – Eine Analyse des russischen Geflügelfleischmarktes
</TD></TR><TR><TD align="left" width="130">
Markus Pock
</TD>
<TD align="center" width="50">
107
</TD>
<TD align="left" >
Untersuchungen zu Wachstumseffekten der WWU mittels Zeitreihenanalyse
</TD></TR><TR><TD align="left" width="130">
Christian Kraxner
</TD>
<TD align="center" width="50">
105
</TD>
<TD align="left" >
Using credit derivatives for managing corporate bond portfolios
</TD></TR><TR><TD align="left" width="130">
Christian Balbier
</TD>
<TD align="center" width="50">
104
</TD>
<TD align="left" >
Föderale Strukturen in den neuen Mitgliedsstaaten der Europäischen Union
</TD></TR><TR><TD align="left" width="130">
Stefan Woytech
</TD>
<TD align="center" width="50">
103
</TD>
<TD align="left" >
Harmonisierung von internem und externem Rechnungswesen auf Basis der IAS/IFRS
</TD></TR><TR><TD align="left" width="130">
Anton Burger
</TD>
<TD align="center" width="50">
92
</TD>
<TD align="left" >
Reasons for the U.S. Growth Experience in the Nineties: Non Keynesian Effects, the Capital Market and Technology
</TD></TR><TR><TD align="left" width="130">
Michael Stastny
</TD>
<TD align="center" width="50">
<b>31</b>
</TD>
<TD align="left" >
<a href="http://nr11029.vhostenzo.sil.at/thesis_stastny.pdf">Economic Growth and Output Variability: An Empirical Analysis</a> (pdf)
</TD></TR></TABLE><br />
The outlier status of my thesis is confirmed by conventional tests for outliers:<br />
<img title="" height="346" alt="outliers" width="475" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/outliers.gif" /><br />
How cool is that?
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070412T16:11:00Z

"almost sure" versus "sure"
http://mahalanobis.twoday.net/stories/3519046/
The difference between an event being <i>almost sure</i> and <i>sure</i> is the same as the subtle difference between something happening <i>with probability 1</i> and happening <i>always</i>.<br />
<br />
If an event is <i>sure</i>, then it will always happen. No other event (even events with probability 0) can possibly occur. If an event is <i>almost sure</i>, then there are other events that could happen, but they happen <i>almost never</i>, that is with probability 0.<br />
<br />
<b>Cool Example</b>: Throwing a dart<br />
<img title="" height="293" alt="dbot" width="168" align="right" class="right" src="http://static.twoday.net/mahalanobis/images/dbot.jpg" /><br />
For example, imagine throwing a dart at a square, and imagine that this square is the only thing in the universe. There is physically nowhere else for the dart to land. Then, the event that "the dart hits the square" is a sure event. No other alternative is imaginable.<br />
<br />
Next, consider the event that "the dart hits the diagonal of the square exactly". The probability that the dart lands on any subregion of the square is equal to the area of that subregion. But, since the area of the diagonal of the square is zero, the probability that the dart lands exactly on the diagonal is zero. So, the dart will <b>almost surely</b> not land on the diagonal, or indeed any other given line or point. Notice that even though there is zero probability that it will happen, it is still possible.<br />
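A quick Monte Carlo sketch of the dart example: floating-point darts essentially never land exactly on the diagonal, even though nothing forbids it:

```python
import random

random.seed(0)
n = 1_000_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x == y:          # dart lands exactly on the diagonal
        hits += 1
print(hits)  # 0 -- possible in principle, but a probability-zero event
```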
<br />
Source: <a href="http://en.wikipedia.org/wiki/Almost_surely">Wikipedia</a>
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070402T20:37:00Z

The waiting time paradox
http://mahalanobis.twoday.net/stories/3486587/
There is an old conundrum in queueing theory that goes like this:
<ul>
<li>A passenger arrives at a bus stop at some arbitrary point in time</li>
<li>Buses arrive according to a Poisson process</li>
<li>The mean interval between the buses is 10 min.</li>
</ul>
What is the mean waiting time until the next bus?<br />
<img title="" height="136" alt="busline" width="322" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/busline.gif" /><br />
Answer: 10 min. This is an example of length-biased sampling. The explanation of the paradox is that a passenger is more likely to arrive during a long interarrival interval than during a short one. <b>[</b> <a href="http://static.twoday.net/mahalanobis/images/waitingtime.jpg">Here</a> is a neat nontechnical explanation (taken from <a href="http://www.amazon.com/ProbabilitiesLittleNumbersThatLives/dp/0470040017/ref=si3_rdr_bb_product/10475137979551140">this</a> book). <b>]</b><br />
<br />
Given an interarrival interval, the passenger's arrival instant is uniformly distributed within it, and the expected waiting time is half the interval's duration. The point is that selection by a random instant represents long intervals more frequently than short ones (with a weight proportional to the length of the interval).<br />
<br />
Consider a long period of time t. The waiting time to the next bus arrival W(τ) as a function of the arrival instant τ of the passenger is represented by:<br />
<img title="" height="209" alt="waitt" width="400" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt.gif" />where the X<sub>i</sub> are the interarrival intervals. The mean waiting time, W_bar, is the average value of this sawtooth curve:<img title="" height="77" alt="waitt01" width="377" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt01.gif" /><br />
Note that long interarrival intervals contribute much more than short ones to the average waiting time. As t grows, t/n → X_bar, hence,<br />
<img title="" height="90" alt="waitt02" width="347" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt02.gif" />Since the X<sub>i</sub> are exponentially distributed,<br />
<img title="" height="76" alt="waitt03" width="271" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt03.gif" /><a href="http://en.wikipedia.org/wiki/Variance">Therefore</a>,<br />
<img title="" height="50" alt="waitt04" width="380" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt04.gif" />Altogether,<br />
<img title="" height="41" alt="waitt05" width="245" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/waitt05.gif" /><br />
Q.E.D.<br />
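The result is also easy to reproduce by simulation; a sketch in Python (bus gaps exponential with mean 10 minutes, passengers arriving at uniformly random instants):

```python
import random

random.seed(42)
mean_gap = 10.0
n = 200_000

# Bus interarrival times (Poisson process => exponential gaps)
gaps = [random.expovariate(1.0 / mean_gap) for _ in range(n)]
total = sum(gaps)

# Passengers arrive at uniformly random instants; long gaps are
# hit proportionally more often (length-biased sampling)
arrivals = sorted(random.uniform(0.0, total) for _ in range(50_000))

waits = []
t_next, i = gaps[0], 0          # time of the next bus, index of its gap
for a in arrivals:
    while t_next < a:           # advance to the first bus after arrival a
        i += 1
        t_next += gaps[i]
    waits.append(t_next - a)

mean_wait = sum(waits) / len(waits)
print(mean_wait)  # close to 10 minutes, not 5
```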
<br />
Sources:<br />
<a href="http://cs.haifa.ac.il/courses/AdvancedOS/">Advanced Course in Operating Systems</a> (University of Haifa), Lectures 1 & 2
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070328T01:42:00Z

ZunZun :: Curve fitting on the web
http://mahalanobis.twoday.net/stories/3482448/
<a href="http://neweconomist.blogs.com/new_economist/">New Economist</a> writes: "<a href="http://www.stat.columbia.edu/~cook/movabletype/archives/2007/03/curve_fitting_o.html">In a new post</a> on the Statistical Modeling, Causal Inference, and Social Science blog, <a href="http://www.stat.columbia.edu/~jakulin/">Aleks Jakulin</a> at Columbia University points us to a great online tool, <a href="http://zunzun.com">ZunZun</a>. It lets you use 2 and 3 dimensional 'Function Finders' to 'help determine the best curve fit for your data'."<br />
<br />
On <a href="http://pages.stern.nyu.edu/~wgreene/Text/econometricanalysis.htm">William Greene's site</a> I found a neat data set (Data Tables :: Table F6.1) for estimating a <a href="http://en.wikipedia.org/wiki/CobbDouglas">Cobb-Douglas production function</a>. ZunZun comes up with the following suggestion:
<p align="center">Y = β<sub>1</sub>( L<sup>0.5</sup>K<sup>0.5</sup>) + β<sub>2</sub>(cos(L)K<sup>1.5</sup>)</p>
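Since the suggested form is linear in β<sub>1</sub> and β<sub>2</sub>, it can be fit by ordinary least squares. A self-contained sketch on synthetic data (the values β<sub>1</sub> = 2.0 and β<sub>2</sub> = 0.5 are hypothetical, not ZunZun's estimates for Greene's data):

```python
import math
import random

# Synthetic (L, K, Y) data generated from the suggested functional form
# with hypothetical beta_1 = 2.0, beta_2 = 0.5
random.seed(5)
data = []
for _ in range(200):
    L, K = random.uniform(1, 10), random.uniform(1, 10)
    f1 = L ** 0.5 * K ** 0.5           # basis function 1
    f2 = math.cos(L) * K ** 1.5        # basis function 2
    data.append((f1, f2, 2.0 * f1 + 0.5 * f2))

# Y = b1*f1 + b2*f2 is linear in the betas: solve the 2x2 normal equations
s11 = sum(f1 * f1 for f1, f2, y in data)
s12 = sum(f1 * f2 for f1, f2, y in data)
s22 = sum(f2 * f2 for f1, f2, y in data)
t1 = sum(f1 * y for f1, f2, y in data)
t2 = sum(f2 * y for f1, f2, y in data)
det = s11 * s22 - s12 * s12
b1 = (t1 * s22 - t2 * s12) / det
b2 = (t2 * s11 - t1 * s12) / det
print(b1, b2)  # recovers 2.0 and 0.5 on noiseless data
```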
Surface Plot:<br />
<img title="" height="352" alt="cobb_surface" width="353" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/cobb_surface.gif" /><br />
Contour Plot:<br />
<img title="" height="369" alt="cobb_contour" width="422" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/cobb_contour.gif" /><br />
The R<sup>2</sup> reaches an unrealistic 0.968 (0.94 for the Cobb-Douglas specification). Textbook data... Here is a scatterplot of the log-transformed data (created with <a href="http://www.rproject.org">R</a>):<br />
<img title="" height="347" alt="scatter3d" width="451" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/scatter3d.gif" />
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070327T02:50:00Z

Proof without Words: Geometric Series
http://mahalanobis.twoday.net/stories/3472911/
<img title="" height="407" alt="viewp01" width="404" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/viewp01.gif" /><br />
<img title="" height="403" alt="viewp02" width="401" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/viewp02.gif" /><br />
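Whatever ratio the pictures use, the algebra behind such proofs is the geometric series formula Σ<sub>n≥1</sub> r<sup>n</sup> = r/(1-r); a quick numerical check for r = 1/4:

```python
# Partial sum of r + r^2 + r^3 + ... versus the closed form r/(1-r)
r = 0.25
partial = sum(r ** n for n in range(1, 60))
print(partial, r / (1 - r))  # both equal 1/3 for r = 1/4
```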
<br />
The Viewpoints 2000 Group<br />
<a href="http://links.jstor.org/sici?sici=0025570X%28200110%2974%3A4%3C320%3APWWGS%3E2.0.CO%3B2F">Mathematics Magazine, Vol. 74, No. 4. (Oct., 2001), p. 320.</a><br />
<br />
related items:<br />
<a href="http://mahalanobis.twoday.net/stories/2796923/">Neat Proofs</a>, Mahalanobis
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070324T02:16:00Z

I'm Confused: How Could Information Equal Entropy?
http://mahalanobis.twoday.net/stories/3464021/
Taken from the <a href="http://www.ccrnp.ncifcrf.gov/~toms/bionet.infotheory.faq.html">bionet.infotheory FAQ</a>: If someone says that information = uncertainty = entropy, then they are confused, or something was not stated that should have been. Those equalities lead to a contradiction, since entropy of a system increases as the system becomes more disordered. So information corresponds to disorder according to this confusion.<br />
<br />
If you always take <b>information to be a decrease in uncertainty</b> at the receiver, you will get straightened out:<br />
<img title="" height="32" alt="entro01" width="181" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/entro01.gif" /><br />
where H is the Shannon uncertainty:<br />
<img title="" height="58" alt="entro02" width="187" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/entro02.gif" /><br />
and p<sub>i</sub> is the probability of the <i>i</i>th symbol. If you don't understand this, read the short <a href="http://www.ccrnp.ncifcrf.gov/~toms/paper/primer/">Information Theory Primer</a>.<br />
<br />
Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of characters, you are uncertain (H<sub>before</sub>) as to what I'm about to send. After you receive a character, your uncertainty goes down (to H<sub>after</sub>). H<sub>after</sub> is never zero because of noise in the communication system. Your decrease in uncertainty is the information (I) that you gain.<br />
<br />
Since H<sub>before</sub> and H<sub>after</sub> are state functions, this makes I a function of state. It allows you to lose information (it's called forgetting).<br />
<br />
Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (H<sub>after</sub>=0). This leads to the SPECIAL CASE where I = H<sub>before</sub>. But H<sub>before</sub> is NOT "the uncertainty", it is the uncertainty of the receiver BEFORE RECEIVING THE MESSAGE.
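A small sketch of the bookkeeping (the two distributions are made up for illustration):

```python
import math

def H(probs):
    """Shannon uncertainty in bits: H = -sum p_i * log2(p_i)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely symbols before the message arrives
H_before = H([0.25, 0.25, 0.25, 0.25])          # 2 bits

# Noise leaves some residual uncertainty after receipt
H_after = H([0.9, 0.1 / 3, 0.1 / 3, 0.1 / 3])

I = H_before - H_after                          # information gained
print(H_before, H_after, I)
```

In the noiseless special case H_after = 0 and I = H_before, which is where the "information = entropy" confusion comes from.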
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070322T01:14:00Z

Brainiacs Succeed in Mapping 248-Dimensional Object
http://mahalanobis.twoday.net/stories/3455181/
<img title="" height="181" alt="e8" width="180" align="right" class="right" src="http://static.twoday.net/mahalanobis/images/e8.jpg" /><b>MIT news office</b>: An international team of 18 mathematicians has mapped one of the largest and most complicated structures in mathematics. If written out on paper, the calculation describing this structure, known as E<sub>8</sub>, would cover an area the size of Manhattan. <br />
<br />
The work is important because it could lead to new discoveries in mathematics, physics and other fields. In addition, the innovative large-scale computing that was key to the work likely spells the future for how long-standing math problems will be solved in the 21st century. Click <a href="http://web.mit.edu/newsoffice/2007/e8.html">here</a> (or <a href="http://www.sciam.com/article.cfm?articleID=6C66165BE7F299DF3D86B476FFD18F17&ref=sciam&chanID=sa003">here</a>) to continue.<br />
<br />
related items:<br />
American Institute of Mathematics: <a href="http://aimath.org/E8/">E<sub>8</sub></a><br />
Lecture Slides: <a href="http://math.mit.edu/~dav/E8TALK.pdf">The Character Table for E8, or How We Wrote Down a 453,060 x 453,060 Matrix and Found Happiness</a>, David Vogan, MIT
Mahalanobis
mathstat
Copyright © 2007 Mahalanobis
20070320T01:35:00Z

MARS :: Multivariate Adaptive Regression Splines
http://mahalanobis.twoday.net/stories/2870138/
<blockquote>"Multivariate Adaptive Regression Splines (MARS) is an implementation of techniques popularized by <a href="http://wwwstat.stanford.edu/~jhf/">Friedman</a> (1991) for solving regression-type problems.<br />
<br />
MARS is a nonparametric regression procedure that makes no assumption about the underlying functional relationship between the dependent and [explanatory] variables. Instead, MARS constructs this relation from a set of coefficients and basis functions that are entirely "driven" from the regression data. In a sense, the method is based on the "divide and conquer" strategy, which partitions the input space into regions, each with its own regression equation. This makes MARS particularly suitable for problems with higher input dimensions (i.e., with more than 2 variables), where the <a href="http://www.statsoft.com/textbook/glosc.html#Curse">curse of dimensionality</a> [see also <a href="http://www.stat.columbia.edu/~cook/movabletype/archives/2004/10/the_blessing_of.html">blessing of dimensionality</a>] would likely create problems for other techniques.<br />
<br />
The MARSplines technique has become particularly popular in the area of data mining because it does not assume or impose any particular type or class of relationship (e.g., linear, logistic, etc.) between the predictor variables and the dependent (outcome) variable of interest. Instead, useful models (i.e., models that yield accurate predictions) can be derived even in situations where the relationship between the predictors and the dependent variables is nonmonotone and difficult to approximate with parametric models." [<a href="http://www.statsoft.com/textbook/stmars.html">Continue</a>]</blockquote>
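A minimal sketch of the building blocks MARS works with: hinge functions max(0, x - t) at knots t, combined linearly so that each region of the input space gets its own linear piece. The real algorithm's knot search and pruning are omitted; the knot at x = 3 and the coefficients below are made up:

```python
def hinge(x, t):
    """MARS-style basis function: a hinge max(0, x - t) at knot t."""
    return max(0.0, x - t)

# A piecewise-linear target with a kink at x = 3:
#   y = 1 + 2x          for x <= 3
#   y = 7 + 0.5*(x - 3) for x  > 3
# It is reproduced exactly by an intercept, a linear term, and one hinge:
def model(x):
    return 1.0 + 2.0 * x - 1.5 * hinge(x, 3.0)

for x in (0.0, 2.0, 3.0, 4.0, 6.0):
    print(x, model(x))
```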
Note to self: Read
<ul>
<li><a href="http://links.jstor.org/sici?sici=00905364%28199103%2919%3A1%3C1%3AMARS%3E2.0.CO%3B2D">Multivariate Adaptive Regression Splines</a>, Jerome H. Friedman, <i>The Annals of Statistics</i>, Vol. 19, No. 1. (Mar., 1991), pp. 167.</li>
<li><a href="http://www.amazon.co.uk/ElementsStatisticalLearningPredictionStatistics/dp/0387952845">The Elements of Statistical Learning: Data Mining, Inference and Prediction</a>, Hastie, T., Tibshirani, R. and Friedman, J.H. (2001), Springer Verlag, New York</li>
</ul>
Hat tip to <a href="http://www.itp.phys.ethz.ch/econophysics/R/contact.htm">Diethelm Würtz</a>
Mahalanobis
mathstat
Copyright © 2006 Mahalanobis
20061030T15:15:00Z

Taylor Effect and Empirical Scaling Law :: Rmetrics
http://mahalanobis.twoday.net/stories/2835225/
<b>Taylor Effect</b>: It is by now well established in the financial econometrics literature that high frequency time series of financial returns are often uncorrelated but not independent because there are nonlinear transformations which are positively correlated. In 1986 Taylor observed that the empirical sample autocorrelations of absolute returns, r, are usually larger than those of squared returns, r^2. A similar phenomenon is observed by Ding et al. (1993) who examined daily returns of the S&P 500 index and conclude that, for this particular series, the autocorrelations of absolute returns raised to the power of θ are maximized when θ is around 1, that is, the largest autocorrelations are found in the absolute returns. Granger and Ding (1995) denote this empirical property of financial returns as the <i>Taylor Effect</i>. Formally, if r<sub>t</sub>, t = 1,...,T, is the series of returns and ρ<sub>θ</sub>(k) denotes the sample autocorrelation of order k of r<sub>t</sub><sup>θ</sup>, θ > 0, the Taylor effect can be defined as follows:
<p align="center">ρ<sub>1</sub>(k) > ρ<sub>θ</sub>(k) for any θ ≠ 1.</p>
However, Granger and Ding (1994, 1996) analyze several series of daily exchange rates and individual stock prices, and conclude that the maximum autocorrelation is not always obtained when θ = 1 but for smaller values of θ. Nevertheless, they point out that the autocorrelations of absolute returns are always larger than the autocorrelations of squares. [1] This can also be observed when looking at <a href="http://www.itp.phys.ethz.ch/econophysics/R/data/textbooks/Wuertz/data/usdchf30min.csv">USDCHF High Frequency FX rates</a> (1996-04-01 00:00:00 to 2001-03-30 23:30:00; 62,496 observations):<br />
<img title="" height="480" alt="usdchfhist" width="496" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/usdchfhist.gif" /><br />
<b>teffectPlot</b>, k = 1,...,10:<br />
<img title="" height="517" alt="taylor_effect_rmetrics" width="497" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/taylor_effect_rmetrics.gif" /><br />
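The effect can also be sketched on simulated data. The following is not the Rmetrics teffectPlot but a toy version in Python: simulate a GARCH(1,1) return series (parameters are illustrative, not fitted to the USDCHF data) and compare the lag-1 autocorrelation of |r|^θ for a few values of θ:

```python
import math
import random

def acf1(z):
    """Lag-1 sample autocorrelation."""
    m = sum(z) / len(z)
    num = sum((z[i] - m) * (z[i - 1] - m) for i in range(1, len(z)))
    den = sum((x - m) ** 2 for x in z)
    return num / den

# Simulate a GARCH(1,1) series: volatility clustering makes
# nonlinear transformations of the returns autocorrelated
random.seed(7)
omega, alpha, beta = 0.05, 0.1, 0.85
h, r = 1.0, []
for _ in range(50_000):
    h = omega + alpha * (r[-1] ** 2 if r else 0.0) + beta * h
    r.append(math.sqrt(h) * random.gauss(0.0, 1.0))

acs = {}
for theta in (0.5, 1.0, 1.5, 2.0):
    acs[theta] = acf1([abs(x) ** theta for x in r])
    print(theta, acs[theta])
```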
<b>Scaling Law</b>: Some financial time series show a self-similar behavior under temporal aggregation. The 'empirical scaling law' relates the average of the unconditional volatility, measured as the absolute value of the return, r(t<sub>i</sub>), over a time interval to the size of the time interval:<br />
<img title="" height="79" alt="scalinglawformula" width="232" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/scalinglawformula.gif" /><br />
where the drift exponent 1/E is an estimated constant that Müller et al. (1990) find to be similar across different currencies and ΔT is a time constant that depends on the currency [2]. The <a href="http://mahalanobis.twoday.net/stories/210704/">Wiener process</a>, a continuous Gaussian random walk, exhibits a scaling law with a drift exponent of 0.5 (slope of green line). The estimated drift exponent for the USDCHF series is 0.52, which is actually not statistically different from 0.5:<br />
<img title="" height="518" alt="scalinglaw" width="487" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/scalinglaw.gif" /><br />
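The drift exponent of 0.5 for a Gaussian random walk is easy to recover by simulation; a sketch: aggregate simulated returns over Δt = 1, 2, 4, ..., 64 and regress log mean |r| on log Δt:

```python
import math
import random

random.seed(3)
n = 2 ** 17                        # base-resolution returns
r = [random.gauss(0.0, 0.01) for _ in range(n)]

# Mean absolute return at aggregation levels dt = 1, 2, 4, ..., 64
xs, ys = [], []
for k in (1, 2, 4, 8, 16, 32, 64):
    agg = [sum(r[i:i + k]) for i in range(0, n, k)]
    mean_abs = sum(abs(x) for x in agg) / len(agg)
    xs.append(math.log(k))
    ys.append(math.log(mean_abs))

# Least-squares slope of log(mean |r|) on log(dt) = drift exponent
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(slope)  # close to 0.5 for a Gaussian random walk
```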
For more information see <a href="http://ideas.repec.org/p/wop/olaswp/_009.html">Fractals and Intrinsic Time  A Challenge to Econometricians</a>.<br />
<br />
<a href="http://www.itp.phys.ethz.ch/econophysics/R/">Here</a> is the official Rmetrics site.<br />
<br />
[1]: see <a href="http://docubib.uc3m.es/WORKINGPAPERS/WS/ws046315.pdf">Stochastic Volatility Models and the Taylor Effect</a>, Alberto MoraGalán and Ana Pérez and Esther Ruiz<br />
[2]: see <a href="http://www.rotman.utoronto.ca/~tmccurdy/papers/edmc.pdf">The Impact of News on Foreign Exchange Rates: Evidence from High Frequency Data</a>, Dirk Eddelbuettel and Thomas H. McCurdy
Mahalanobis
mathstat
Copyright © 2006 Mahalanobis
20061021T21:19:00Z

Neat Proofs
http://mahalanobis.twoday.net/stories/2796923/
<img title="" height="848" alt="neatproofs" width="475" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/neatproofs.gif" /><br />
Addendum:<br />
<img title="" height="337" alt="geo03" width="442" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/geo03.gif" /><br />
by Rick Mabry <br />
<a href="http://links.jstor.org/sici?sici=0025570X%28199902%2972%3A1%3C63%3APWW%3C%3E2.0.CO%3B2O">Mathematics Magazine, Vol. 72, No. 1. (Feb., 1999), p. 63.</a>
Mahalanobis
mathstat
Copyright © 2006 Mahalanobis
20061012T22:37:00Z

GARCH-in-mean :: Recursive Estimates
http://mahalanobis.twoday.net/stories/2684143/
Finance theory suggests that an asset with a higher perceived risk would pay a higher return on average [<a href="http://www.techcentralstation.com/080305G.html">Caution</a>]. For example, let r<sub>t</sub> denote the ex post rate of return on some asset minus the return on a safe alternative asset. Suppose that r<sub>t</sub> is decomposed into a component anticipated by investors at date t-1 (denoted μ<sub>t</sub>) and a component that was unanticipated (denoted u<sub>t</sub>):<br />
<img title="" height="38" alt="garchblog" width="302" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/garchblog.gif" />Then the theory suggests that the mean return (μ<sub>t</sub>) would be related to the variance of the return. The GARCH(1,1)inmean, or GARCH(1,1)M, regression model is characterized by<br />
<img title="" height="145" alt="garchblogg" width="302" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/garchblogg.gif" /><br />
for ε i.i.d. with zero mean and unit variance. The effect that higher perceived variability of u<sub>t</sub> has on the level of r<sub>t</sub> is captured by the parameter <font color="green">γ</font> (see Hamilton's <a href="http://www.amazon.com/TimeAnalysisJamesDouglasHamilton/dp/0691042896/sr=81/qid=1158536684/ref=pd_bbs_1/00246818708859228?ie=UTF8&s=books">TSA Bible</a>).<br />
<br />
A realization (n = 500) of a GARCH(1,1)-M process with κ = 0.005, γ = 2, ω = 0.0001, α = 0.1, β = 0.8, and ε ~ N(0,1) looks as follows:<br />
<img title="" height="411" alt="examplegarchm" width="477" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/examplegarchm.gif" /><br />
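A sketch of how such a realization can be simulated (variance-in-mean form, with γ multiplying h<sub>t</sub>; some authors put √h<sub>t</sub> in the mean equation instead):

```python
import math
import random

# GARCH(1,1)-M with the parameters quoted above:
#   r_t = kappa + gamma * h_t + u_t,   u_t = sqrt(h_t) * eps_t
#   h_t = omega + alpha * u_{t-1}^2 + beta * h_{t-1}
kappa, gamma = 0.005, 2.0
omega, alpha, beta = 0.0001, 0.1, 0.8

random.seed(11)
h = omega / (1 - alpha - beta)     # start at the unconditional variance
u, r = 0.0, []
for _ in range(500):
    h = omega + alpha * u ** 2 + beta * h
    u = math.sqrt(h) * random.gauss(0.0, 1.0)
    r.append(kappa + gamma * h + u)

print(len(r), sum(r) / len(r))
```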
Unfortunately, I got a GARCH-in-mean effect (γ) of <b>-34</b> instead of <b>+2</b> when I estimated the process given above (n=500) with R (<a href="http://www.rproject.org">R</a> uses the <a href="http://www.doornik.com">Ox</a> package via the "garchOxFit" command to estimate GARCH models; see <a href="http://gsbwww.uchicago.edu/fac/ruey.tsay/teaching/bs41202/G@RCH_info.txt">here</a> for "garchOxFit" installation instructions for Windows.) Estimating the GARCH(1,1)-M coefficient for n=500, 1000, 2000, 3000, ..., 10000 yields the following result:<br />
<img title="" height="465" alt="garchblog02" width="495" align="center" class="center" src="http://static.twoday.net/mahalanobis/images/garchblog02.gif" /><br />
This would mean that you shouldn't even think of estimating such a model if you don't have at least 4000 observations. So far I haven't seen any applied work where people used more than 1000 observations. How cool is that? Suggestions welcome.
Mahalanobis
mathstat
Copyright © 2006 Mahalanobis
20060917T22:16:00Z

Helmeted cyclists in more peril on the road?
http://mahalanobis.twoday.net/stories/2663621/
<b>timesonline</b>: Cyclists who wear helmets are more likely to be knocked off their bicycles than those who do not, according to research.<br />
<br />
Motorists give helmeted cyclists less leeway than bareheaded riders because they assume that they are more proficient. They give a wider berth to those they think do not look like proper cyclists, including women, than to kitted-out, Lycra-clad warriors.<br />
<br />
Ian Walker, a traffic psychologist, was hit by a bus and a truck while recording 2,500 overtaking manoeuvres. On both occasions he was wearing a helmet.<br />
<br />
During his research he measured the exact distance of passing traffic using a computer and sensor fitted to his bicycle. Half the time Dr Walker, of the University of Bath, was bareheaded. For the other half he wore a helmet and has the bruises to prove it.<br />
<br />
He even wore a wig on some of his trips to see if drivers gave him more room if they thought he was a woman. They did.<br />
<br />
He was unsure whether the protection of a helmet justified the higher risk of having a collision: "We know helmets are useful in low-speed falls, and so are definitely good for children."<br />
<br />
On average, drivers steered an extra 3.3 inches away from cyclists without helmets, compared with those wearing the safety hats. Motorists were twice as likely to pass very close to the cyclist if he was wearing a helmet.<br />
<br />
For an excellent discussion, see <a href="http://www.stat.columbia.edu/~cook/movabletype/archives/2006/09/should_you_wear.html">here</a>. <a href="http://static.twoday.net/mahalanobis/images/geiles_fahrrad.jpg">Here </a>is a nice picture of a bike.
Mahalanobis
mathstat
Copyright © 2006 Mahalanobis
20060913T12:24:00Z