This is another symbol for a derivative. You can read it as “The derivative of y with respect to x.” Here y is equivalent to f(x), since y is itself a function of x.
Both of these symbols represent the second derivative of the function, which means you take the derivative of the first derivative of the function. You would read it simply as “The second derivative of f of x.”
These symbols represent the nth derivative of f(x). Much like the second derivative, you would differentiate the function n successive times. It reads as “The nth derivative of f of x.” If n were 4, it would be “The fourth derivative of f of x,” for example.
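As a sketch of the notation in action, here is the first, second, and nth (n = 4) derivative of an arbitrary example function, computed with SymPy (assuming it is installed; the function x⁴ is my own example, not the article’s):

```python
import sympy as sp

x = sp.symbols("x")
f = x**4  # an arbitrary example function

first = sp.diff(f, x)       # f'(x)  -> 4*x**3
second = sp.diff(f, x, 2)   # f''(x) -> 12*x**2
fourth = sp.diff(f, x, 4)   # the nth derivative with n = 4 -> 24

print(first, second, fourth)
```

Each call to `sp.diff` simply repeats differentiation the requested number of times, mirroring the definition above.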
This symbol represents integration of the function. Integration of a function is the opposite of differentiation. The variables a and b represent the lower limit and upper limit of the section of the graph the integral is being applied to. If there are no values for a and b, the integral is indefinite and applies to the entire function. You would read it as “The integral of f of x with respect to x (over the domain of a to b.)”
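A small sketch with SymPy (assumed available; the function f(x) = 2x and the limits a = 0, b = 3 are my own illustrative choices) showing both the indefinite and the definite integral:

```python
import sympy as sp

x = sp.symbols("x")
f = 2 * x

indefinite = sp.integrate(f, x)        # x**2 (plus a constant of integration)
definite = sp.integrate(f, (x, 0, 3))  # area under 2x from a = 0 to b = 3 -> 9

print(indefinite, definite)
```

Note that the indefinite result omits the “+ C” constant, as SymPy conventionally does.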
A difference, or change, in a quantity: for example, where we say “delta x” we mean how much x changes. You will often come across delta in this context when working with values that characteristically change, such as velocity or acceleration. We also see this meaning when working with slope: the slope is the ratio of the vertical and horizontal changes between two points on a line. You’ll see the use of upper-case delta in the formula for slope: Slope = rise / run = Δy/Δx.
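The slope formula above can be sketched in a few lines of Python (the two points are hypothetical examples of my own):

```python
def slope(p1, p2):
    """Return the slope Δy/Δx between two (x, y) points."""
    dx = p2[0] - p1[0]  # run (Δx)
    dy = p2[1] - p1[1]  # rise (Δy)
    return dy / dx

print(slope((1, 2), (4, 11)))  # Δy = 9, Δx = 3, so the slope is 3.0
```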
Lower-case δ is used when calculating limits. The epsilon-delta definition of a limit is a precise method of evaluating the limit of a function. Epsilon (ε) in calculus terms means a very small, positive number. The epsilon-delta definition tells us that:
Where f(x) is a function defined on an interval around x0, the limit of f(x) as x approaches x0 is L if, for every ε > 0, there exists δ > 0 such that for all x: 0 < |x − x0| < δ implies |f(x) − L| < ε.
This definition is particularly useful; it guarantees that the values returned by the function f(x) can be made as close to the limit L as we like by using only points in a small enough interval around x0. It gives us a useful measure regardless of how close to L we wish f(x) to be.
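The definition can be checked numerically. Here is a sketch (my own example, not from the source) for f(x) = 2x + 1 at x0 = 1, where L = 3: choosing δ = ε/2 works, because |f(x) − L| = 2|x − x0| < 2δ = ε.

```python
def f(x):
    return 2 * x + 1

x0, L = 1.0, 3.0
for eps in (0.1, 0.01, 0.001):
    delta = eps / 2
    # sample points satisfying 0 < |x - x0| < delta
    xs = [x0 - 0.9 * delta, x0 + 0.5 * delta, x0 + 0.99 * delta]
    assert all(abs(f(x) - L) < eps for x in xs)

print("epsilon-delta condition satisfied for all sampled epsilons")
```

This only samples a few points per ε, so it illustrates rather than proves the limit; the algebraic argument δ = ε/2 is what actually covers all x.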
Case studies are in-depth studies of a phenomenon, such as a person, group, or situation. The phenomenon is studied in detail, cases are analyzed, and solutions or interpretations are presented. They can provide a deeper understanding of a complex topic or help a person gain experience about a certain historical situation. Although case studies are used across a wide variety of disciplines, they are most frequently found in the social sciences.
Case studies are a type of qualitative research. This method does not involve statistical hypothesis testing. It has been criticized as being unreliable, too general, and open to bias. To avoid some of these problems, studies should be carefully planned and implemented. The University of Texas suggests the following six steps for case studies to ensure the best possible outcome:
Choose the cases and state how data is to be gathered and which techniques for analysis you’ll be using. Well-designed studies consider all available options for cases and for ways to analyze those cases. Multiple sources and data analysis methods are recommended.
Prepare to collect the data. Consider how you will deal with large sets of data in order to avoid becoming overwhelmed once the study is underway. You should formulate good questions and anticipate how you will interpret answers. Multiple collection methods will strengthen the study.
Textbooks are including more real-life studies to veer away from the “clean” data sets found in traditional books. Those data sets do little to prepare students for applying statistical concepts to their ultimate careers in industry or the social sciences.
Censoring in a study is when there is incomplete information about a study participant, observation, or value of a measurement. In clinical trials, censoring occurs when the event of interest doesn’t happen while the subject is being monitored, or when the subject drops out of the trial.
Right censoring (sometimes called point censoring) happens when the subject leaves the study before it’s finished (“loss-to-follow-up”) or when the event you’re interested in doesn’t happen during the course of the study (“end-of-study”).
For example, in a 13-week clinical trial for pain relief, as many as 35% of patients failed to complete the study because of side effects from the medication or lack of relief from the placebo (AMSTAT). In general, dropouts from trials are very common: a 2010 report by the National Academy of Sciences states that patient dropout rates can sometimes exceed 30%.
If the event of interest (i.e. death, cure or other event) doesn’t happen during the course of study, the event is censored and given an event time of (t,∞) where t is the time of the end of the study.
Left censoring is when the subject was at risk for the event being studied before the start of the study. It’s not very common for this to be a factor. If it does happen, it’s usually not an issue for clinical trials as the starting point of the trial may be the occurrence of a particular treatment or the development of a disease.
When an entire study group has already experienced the event of interest, it’s called right truncation. For example, you might study groups of individuals who are admitted to the hospital post-stroke. If patients in the study are all high-risk, but haven’t yet experienced the event, it’s called left truncation. Life insurance policies are examples of left truncation; people enter into a policy and have the event of “death” at some point in time. Truncation is always deliberate and part of a study design, whereas censoring is random.
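Right-censored data is commonly encoded as a follow-up time plus an event indicator (1 = event observed, 0 = censored). A minimal sketch, using hypothetical subjects from a 13-week study like the one described above:

```python
# Each subject: follow-up time in weeks and an event indicator.
subjects = [
    {"time": 5.0,  "event": 1},  # event observed at week 5
    {"time": 13.0, "event": 0},  # end-of-study: still event-free at week 13
    {"time": 8.0,  "event": 0},  # loss-to-follow-up: dropped out at week 8
]

n_censored = sum(1 for s in subjects if s["event"] == 0)
print(f"{n_censored} of {len(subjects)} observations are right-censored")
```

Survival-analysis libraries generally expect exactly this (time, indicator) encoding, so censored subjects still contribute information up to their last observed time.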
Change of variable is a technique where, by a process of substitution, you can change the variables in an integral to new variables. Typically you would do this in an effort to simplify the problem, or make it easier to understand.
As an example, imagine you wanted to find the roots of a polynomial. You know all about how to solve quadratic polynomials—in fact, you’ve probably memorized a formula for it—but suppose this polynomial was something rather harder, say, a sixth degree polynomial. Suppose it was:
Sixth degree polynomials are not just hard to solve off the bat; often, they’re impossible. However, a change of variables can save the day. Let’s define a new variable, u = x3. Then you can write the equation as:
Of course, you don’t actually want to know values of u; you want x. You can get that by substituting back, and you find the real solutions to the equation are:
It’s not just solving polynomials where a technique like this comes in useful, though. Change of variable is also used in integration, differentiation, and coordinate transformations. When using it in calculus, remember to replace the variable everywhere it occurs for the change to be valid. For differentiation, you could use the chain rule; for integration, you could use u-substitution.
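The substitution u = x³ can be carried out symbolically with SymPy (assumed available). The sixth-degree polynomial below is my own illustrative example, not the one from the article:

```python
import sympy as sp

x, u = sp.symbols("x u")
poly = x**6 - 9*x**3 + 8  # a hypothetical sextic that the substitution tames

# Substitute u = x**3, turning the sextic into a quadratic in u.
quadratic = poly.subs(x**3, u)    # u**2 - 9*u + 8
u_roots = sp.solve(quadratic, u)  # [1, 8]

# Substitute back: x**3 = u, so the real roots are the real cube roots of u.
x_roots = sorted(sp.real_root(r, 3) for r in u_roots)
print(x_roots)  # [1, 2]
```

Note how every occurrence of x³ (including x⁶ = (x³)²) is replaced, which is exactly the “change the variable every time it occurs” rule from above.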
The Clausen function (also called the Clausen Integral) is a transcendental, special function related to the dilogarithm of complex argument. It is widely used in experimental and higher-dimensional mathematics, and in physics—especially in quantum theory. Its usefulness stems in part from the fact that many indefinite integrals of trigonometric functions and logarithmic functions can be expressed in closed form with Clausen functions.
The Clausen function is intimately connected with various other functions including the polygamma function, Dirichlet eta function and the Riemann zeta function. The Lobachevsky function is basically the same function with a change of variable.
While this particular form is often called “the” Clausen function, other forms of the integral do exist. For example, Junesang (2016) formulated a new definite integral formula for the Clausen function by using a known relationship between the Clausen function and the generalized Zeta function.
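The standard Clausen function Cl₂ also has a well-known Fourier series, Cl₂(θ) = Σ_{k≥1} sin(kθ)/k², which makes it easy to sketch numerically (the truncation level below is my own choice; convergence is slow near θ = 0):

```python
import math

def clausen2(theta, terms=200000):
    """Approximate Cl2(theta) by truncating its Fourier series."""
    return sum(math.sin(k * theta) / k**2 for k in range(1, terms + 1))

# A known special value: Cl2(pi/2) equals Catalan's constant = 0.9159655941...
print(round(clausen2(math.pi / 2), 6))  # 0.915966
```

This brute-force sum is only a sketch; serious numerical work would use the relation to the dilogarithm or an accelerated series instead.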
The Cochran-Mantel-Haenszel (CMH) Test is a test of association for data from different sources, or from stratified data from one source. It is a generalization of the McNemar test, suitable for any experimental design including case control studies and prospective studies. While the McNemar can only handle pairs of data (i.e. a 2 x 2 contingency table), the CMH can handle a stratified sample arranged as a set of k 2 x 2 tables (a 2 x 2 x k table). The results from the tables are weighted (i.e. given different levels of importance) according to the size of the sample in each stratum. For pairs of data, the results from CMH and McNemar will be the same.
The CMH statistic is particularly useful in clinical trials, where confounding variables cause extra connections between the dependent variable and independent variable. To run the CMH test, the confounding variable is categorized across a series of 2 x 2 tables, each of which represents one level of the confounding variable. Each table represents a “clean” connection between the independent and dependent variable — without the confounding variable causing hidden associations. As the test is run on these individual tables and not one combined table, it avoids the spurious associations that happen when you try to collapse the individual tables together — a phenomenon called Simpson’s Paradox (Rao et al., 2008).
It’s recommended that you use statistical software because the CMH statistic is tedious to calculate by hand; it’s not uncommon to run this test on large numbers of tables (over 30 is common), so the calculations can become quite lengthy. In addition, the test is made a little more complicated by the fact that there are different versions of the test. For example, SAS has three versions, Types 1, 2 and 3 (DiMaggio, 2012):
The null hypothesis for the CMH test is that the odds ratio (OR) is equal to one. An odds ratio of exactly 1 means that exposure to property A does not affect the odds of property B. If you get a significant result in this test (i.e. if your test rejects the null hypothesis), then you can conclude there is an association between A and B.
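For small numbers of strata the statistic can be computed by hand. Below is a sketch of the CMH statistic (without the continuity correction; the two stratified tables are hypothetical examples of my own, each laid out as [[a, b], [c, d]]):

```python
tables = [
    [[10, 5], [5, 10]],  # stratum 1
    [[10, 5], [5, 10]],  # stratum 2
]

num = 0.0  # sum over strata of (observed a - expected a)
den = 0.0  # sum over strata of the hypergeometric variance of a
for (a, b), (c, d) in tables:
    n = a + b + c + d
    r1, r2 = a + b, c + d  # row totals
    c1, c2 = a + c, b + d  # column totals
    num += a - r1 * c1 / n                       # observed minus expected
    den += r1 * r2 * c1 * c2 / (n**2 * (n - 1))  # variance under the null

cmh = num**2 / den
print(round(cmh, 4))  # compare against a chi-square distribution with 1 df
```

A significant chi-square value here (1 degree of freedom) rejects the null hypothesis that the common odds ratio equals one.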
A cohort study, used in the medical fields and social sciences, is an observational study used to estimate how often disease or life events happen in a certain population. Typical measures estimated include the incidence rate, relative risk, or absolute risk.
The study usually has two groups: exposed and not exposed. If the exposure is rare (for example, exposure to an industrial solvent), then the cohort is called a “special exposure cohort.” Both groups are followed to see who develops a disease and who does not. For example, you could look at cigarette smokers to see who gets breast cancer and who does not. The study would include a group of smokers, and a group of non-smokers.
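The relative risk from such a two-group design is simply the ratio of the incidence in the exposed group to the incidence in the unexposed group. A sketch with hypothetical counts (the numbers below are invented for illustration):

```python
exposed = {"cases": 30, "n": 100}    # e.g. the smokers
unexposed = {"cases": 10, "n": 100}  # e.g. the non-smokers

risk_exposed = exposed["cases"] / exposed["n"]        # incidence = 0.30
risk_unexposed = unexposed["cases"] / unexposed["n"]  # incidence = 0.10
relative_risk = risk_exposed / risk_unexposed         # about 3.0

print(relative_risk)
```

A relative risk of about 3 would mean the exposed group develops the disease roughly three times as often as the unexposed group over the follow-up period.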