| url | text | date | metadata |
|---|---|---|---|
http://mathhelpforum.com/pre-calculus/194595-question-about-forumula-weight-loss.html
|
# Math Help - Question about formula for weight loss
1. ## Question about formula for weight loss
I was trying to come up with an equation which would calculate a person's weight over time as they add additional exercise. The problem I'm having is that the change in weight is dependent on the current weight. For example, two people at different weights will burn different amounts of calories for the same task. I can't figure out how to make an equation which can calculate the weight for a given day.
For someone of a given weight whose weight is constant, they take in a certain amount of calories and burn a certain amount of calories. The calories they burn are made up of their metabolic calories and activity calories. Metabolic calories are the calories used when your body is keeping all systems going. Activity calories are calories burned by moving around and doing stuff.
Let's say we have a situation with the following conditions:
Person weighs 250 pounds (w)
Calories intake per day = 3000
Metabolic burn rate per day = 8 calories per pound
Current activity burn rate per day = 4 calories per pound
Since the person is at equilibrium, we know that their intake calories and burn calories are the same:
3000 = 8w + 4w
In his current situation, he will stay at 250 pounds forever. But say he wants to lose weight by exercise alone. He will eat the same 3000 calories, but he will increase his activity burn rate to 5 calories per pound per day. Each day his weight will change according to this equation:
change in weight = (3000 - 8w - 5w)/3600
The 3600 is there to convert calories to pounds (3600 calories per pound)
Here's where I get stuck. I can come up with this equation to describe his weight change over time:
w2 = w1 + (3000 - 8w1 - 5w1)/3600
or
w2 = w1 + (3000 - 13w1)/3600
Where:
w1 = His weight from the previous day
w2 = His weight today
I can put that formula into Excel and see how his weight changes over time. But what I'm struggling with is how to create a single equation which will calculate his weight on a given day. For example, at day 0 his weight is 250. At day 10 his weight is ? At day 100 his weight is ?
What would I need to do to make a single equation to calculate his weight on any given day?
2. ## Re: Question about formula for weight loss
Welcome to MHF, sandyM!
What you have is called a differential equation:
change in weight per day = (3000 - 8w - 5w)/3600
The solution is:
$w(t) = c_1 e^{-13 t/3600}+3000/13$
where $c_1$ is a constant such that at time t=0 the weight is the initial weight.
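As a cross-check (an editorial sketch, not part of the original thread): the day-by-day rule is a linear recurrence with exact solution $w_n = 3000/13 + (w_0 - 3000/13)(1 - 13/3600)^n$, which the exponential formula above approximates very closely. A few lines of R show both:
```r
# Iterate the poster's daily rule: w2 = w1 + (3000 - 13*w1)/3600.
days <- 100
w <- numeric(days + 1)
w[1] <- 250                      # starting weight in pounds
for (day in 1:days) {
  w[day + 1] <- w[day] + (3000 - 13 * w[day]) / 3600
}

# Continuous (ODE) solution: w(t) = c1*exp(-13*t/3600) + 3000/13, with w(0) = 250.
c1 <- 250 - 3000 / 13
w_ode <- c1 * exp(-13 * (0:days) / 3600) + 3000 / 13

round(w[c(1, 11, 101)], 2)      # day 0, 10, 100: 250.00 249.32 244.16
round(w_ode[c(1, 11, 101)], 2)  # closed form agrees to within about 0.01 lb
```
Both curves decay toward the equilibrium weight 3000/13 ≈ 230.8 pounds, which is where intake again balances the new burn rate.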
3. ## Re: Question about formula for weight loss
Originally Posted by ILikeSerena
Welcome to MHF, sandyM!
What you have is called a differential equation:
change in weight per day = (3000 - 8w - 5w)/3600
The solution is:
$w(t) = c_1 e^{-13 t/3600}+3000/13$
where $c_1$ is a constant such that at time t=0 the weight is the initial weight.
That's awesome! I would have never figured that out. Thanks for the help and the link to the website.
4. ## Re: Question about formula for weight loss
You're welcome!
|
2015-03-31 13:02:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6405353546142578, "perplexity": 1010.1917014776457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300578.50/warc/CC-MAIN-20150323172140-00160-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://pourlesnotres.fr/3w8i1q/viewtopic.php?id=833d56-lurrus-jund-modern
|
Zero population growth is the demographic term for a population that is growing by zero percent, neither increasing nor decreasing in size. This demographic balance could occur when the birth rate and death rate are equal. The fertility rate at which a population is maintained but not grown is the replacement level; it is affected by many factors, notably the average number of births per woman. Countries with negative or zero natural population growth have more deaths than births, or an even number of deaths and births; this figure does not include the effects of immigration or emigration. The economist Thomas Robert Malthus, in his Essay on the Principle of Population (1798), proposed the first systematic population theory, and the closing of the American frontier, as declared by the U.S. Census Bureau in 1890, engendered a Malthusian revival, that is, calls for immediate zero population growth. A trans-Atlantic eugenics movement sought to breed "desirable" traits into the population, an approach that is outdated and condemned by all credible groups. Modern advocacy groups such as Population Connection instead promote a sustainable global birth rate at or below replacement level by voluntary means, informing desired family size and ensuring the means and rights of human reproduction; the corresponding message was simple: stop at two.

(In unrelated technical senses, "zero mean" can describe data centered so that the mean is zero, as PCA requires, or noise whose average over time is zero. Note also the distinction between errors, the deviations from the true population data-generating process, and residuals, the deviations you get when you estimate your model.)

Statistics: estimation of a population mean. The most fundamental point and interval estimation process involves the estimation of a population mean. Suppose it is of interest to estimate the population mean, μ, for a quantitative variable; a good definition of μ describes both the variable and the population. Data collected from a simple random sample can be used to compute the sample mean, x̄, where the value of x̄ provides a point estimate of μ. Statisticians have shown that the mean of the sampling distribution of x̄ is equal to the population mean, μ, and that its standard deviation is σ/√n, where σ is the population standard deviation. The quantity σ/√n is called the standard error, and 1.96 is the number of standard errors from the mean necessary to include 95% of the values in a normal distribution; 95% of intervals of the form x̄ ± 1.96σ/√n will contain the population mean. Lower levels of confidence lead to even narrower intervals; changing the constant from 1.96 to 1.645, for instance, gives a 90% interval. When the population standard deviation, σ, is unknown, the sample standard deviation is used to estimate σ in the confidence interval formula. In practice, statisticians usually consider samples of size 30 or more to be large; in the large-sample case, the central limit theorem indicates that the sampling distribution of x̄ can be approximated by a normal distribution, and this observation also forms the basis for procedures used to select the sample size, since the sample size affects the margin of error. The estimation procedures can be extended to two populations for comparative studies: the sampling distribution of x̄1 − x̄2 provides the basis for an interval estimate of the difference between two population means, and a point estimate of a population proportion is given by the sample proportion, so intervals for a proportion or for the difference between proportions are constructed in much the same fashion. (Exercise: a normal population has a mean of 80.0 and a standard deviation of 14.0. a. Compute the probability of a value between 75.0 and 90.0. b. Compute the probability of a value of 75.0 or less. c. Compute the probability of a value between 55.0 and 70.0.)

Hypothesis test for a population mean. In "Hypothesis Test for a Population Mean," we learn to use a sample mean to test a hypothesis about a population mean. The null hypothesis states that the population mean equals a specific value; the alternative hypothesis says the population mean is "greater than," "less than," or "not equal to" the value we assume is true in the null hypothesis. As always, the hypotheses come from the research question. Consider an example. Many people use the data/Internet capabilities of a phone as much as, if not more than, they use voice capability, so the data service of a cell company is an important factor when choosing a phone and service. Melanie read an advertisement from the Cell Phone Giants (CPG, for short, and yes, we're using a fictitious company name) claiming that with their network it takes, on average, only 12 seconds to download a typical 3-minute song from iTunes. She thinks the ad is too good to be true, so she gathers data to test it. Collecting data can be very expensive, so she chooses a strategy that helps collect a useful sample: she uses her friend's phone and times the download of the same 3-minute song at randomly selected days, times, and locations in Los Angeles. The sample has to be random so that it represents the population described by the null hypothesis. Her sample of 45 downloads has a mean download time of 13.5 seconds, which is greater than 12 seconds, with a sample standard deviation of 3.2 seconds. Isn't this evidence that the CPG claim is wrong? But be careful, because this is not always the case: even if the overall average download time is 12 seconds, we don't expect all samples to have a mean download time exactly equal to 12 seconds. This is a question about sampling variability, and we must do a simulation or use a mathematical model to examine the sampling distribution of sample means. Melanie needs to determine how unlikely her data is if CPG's claim is actually true: assuming the average download time for the song is really 12 seconds, what is the probability that 45 random downloads will have a mean of 13.5 seconds or more? Since she has no way of knowing the population standard deviation, σ, Melanie uses the sample standard deviation, s = 3.2, as an approximation. The standard error is then s/√n = 3.2/√45 ≈ 0.48, and she can compute the t-score of her sample mean, which measures how far the sample mean is from the claimed population mean in terms of standard errors:

$T = \frac{\mathrm{statistic}-\mathrm{parameter}}{\mathrm{standard\ error}} = \frac{\bar{x}-\mu}{s/\sqrt{n}} = \frac{13.5-12}{0.48} = 3.14$

So her sample mean is 3.14 standard errors above the claimed mean. The probability of a sample mean this far above μ is very small (it can be verified using a t-distribution with n − 1 = 44 degrees of freedom), which means that if μ = 12, we will rarely see sample means greater than 13.5. Two explanations remain: either the CPG claim is correct and Melanie's random sample is an extremely unlikely one, or the claim is wrong. To come to a conclusion about H0, we compare the P-value to the significance level α: if the P-value is less than α, we reject the null hypothesis in favor of the alternative; otherwise, the data fits with typical results from random samples selected from the population described by the null hypothesis, and we give the null hypothesis the benefit of the doubt. (Note that Melanie did not state a significance level before collecting her data; in a formal test, one should be chosen in advance.) Here the P-value is very small, so Melanie's data provides convincing evidence against the null hypothesis: we reject the null and accept the alternative hypothesis that the mean download time is greater than 12 seconds. In software output, the result h of such a test is 1 if the test rejects the null hypothesis at the 5% significance level, and 0 otherwise. The logic is the same for the other hypothesis tests in Modules 8 and 9. The following activities give you an opportunity to practice parts of the hypothesis testing process for a population mean; use a simulation when needed to answer the questions, state your hypotheses and what μ represents, and state your conclusions in context. Later you will have the opportunity to practice the hypothesis test from start to finish.
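To make Melanie's calculation concrete, here is a minimal sketch in R (an editorial illustration; the summary statistics n = 45, x̄ = 13.5, and s = 3.2 come from the example above, and the P-value uses a t-distribution with n − 1 degrees of freedom):
```r
# One-sided t-test from summary statistics:
# H0: mu = 12 versus Ha: mu > 12.
n    <- 45
xbar <- 13.5
s    <- 3.2
mu0  <- 12

se    <- s / sqrt(n)                               # standard error, about 0.48
tstat <- (xbar - mu0) / se                         # t-score, about 3.14
p     <- pt(tstat, df = n - 1, lower.tail = FALSE) # one-sided P-value

round(c(standard.error = se, t.score = tstat, p.value = p), 4)
# p.value is about 0.0015, far below alpha = 0.05, so we reject H0.
```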
|
2022-05-22 09:55:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6787992715835571, "perplexity": 712.708023450015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00120.warc.gz"}
|
http://openmx.ssri.psu.edu/thread/3882?q=thread/3882
|
# Another excess memory usage problem - sudden spike when using csolnp
23 posts / 0 new
Offline
Joined: 04/30/2013 - 11:05
Another excess memory usage problem - sudden spike when using csolnp
CSOLNP is working quite nicely in general, but I have a few circumstances where things go dramatically wrong and crash my machine (hard reboot needed) due to excess memory usage if I don't notice fast enough. This doesn't occur when using npsol. With the earlier memory usage issues ( http://openmx.psyc.virginia.edu/thread/2551 ) memory usage increased gradually, but in this case it seems much more sudden.
https://www.dropbox.com/s/kgpsualdlnaowhs/memprobmodel.RData?dl=0
test <- mxRun(memprobmodel, intervals=T)
edit: I don't know specifically what causes the issue, but I'm making extensive use of algebra, exponential functions, and definition variables.
editedit: problem does still exist even with latest updates (26-8-2014). so far I only experience it when calculating confidence intervals. With the above model, after a few minutes of fitting with memory usage at a couple of hundred mb, it suddenly starts going up very rapidly. The problem occurs on more than 1 pc.
Offline
Joined: 05/24/2012 - 00:35
seems fixed?
Offline
Joined: 04/30/2013 - 11:05
Sorry for the confusion, I've
Sorry for the confusion, I've edited the top post to reflect my current understanding, the problem still persists, I just only note it when calculating confidence intervals.
Offline
Joined: 07/31/2009 - 15:14
Windows specific?
I could not reproduce this fault with OpenMx from SVN 3766, when run on a Mac Pro. No sign of excessive RAM usage (machine has 64G but reported 53G free throughout). For the record, here's the output I got with CSOLNP:
> summary(memprobRun)
Summary of ctsem
free parameters:
name matrix row col Estimate Std.Error lbound ubound
1 drift11 discreteDRIFT 1 1 0.9892736976 8.071140e-05 1
2 drift21 discreteDRIFT 2 1 0.0550340030 NA
3 drift12 discreteDRIFT 1 2 -0.0532963880 NA
4 drift22 discreteDRIFT 2 2 -0.0029642474 4.789469e-06 1
5 diffusion11 discreteDIFFUSION 1 1 0.1334714775 8.688719e-03 0
6 diffusion21 discreteDIFFUSION 2 1 0.0063801653 1.020747e-02
7 diffusion22 discreteDIFFUSION 2 2 0.2673616861 2.224372e-02 0
8 cint1 discreteCINT 1 1 0.4267176984 1.185061e-02
9 cint2 discreteCINT 1 2 3.4857018784 2.580032e-02
10 T1var11 withinphi 1 1 3.8303294374 1.511289e+00 0
11 T1var21 withinphi 2 1 -0.1442955883 5.484717e-01
12 T1var22 withinphi 4 1 0.5387750154 NA 0
13 T1meanV1 T1MEANS 1 1 17.6330892849 2.204301e-01
14 T1meanV2 T1MEANS 2 1 4.5930297322 7.128458e-02
15 traitvar11 discreteTRAITVAR 1 1 0.0005614714 1.897635e-03 0
16 traitvar21 discreteTRAITVAR 2 1 0.0147575449 3.549271e-03
17 traitvar22 discreteTRAITVAR 2 2 0.0083684409 1.051363e-02 0
18 T1traitcov11 T1TRAITCOV 1 1 0.0622806450 1.247996e-01
19 T1traitcov21 T1TRAITCOV 2 1 -0.4751025866 4.229783e-01
20 T1traitcov12 T1TRAITCOV 1 2 -0.1015906162 NA
21 T1traitcov22 T1TRAITCOV 2 2 -0.8484358084 NA
confidence intervals:
lbound estimate ubound note
ctsem.DRIFT[1,1] 0.02070937 0.0289038 0.02084729 !!!
ctsem.DRIFT[2,1] 0.66084469 0.7923929 0.70102875 !!!
ctsem.DRIFT[1,2] -0.81542200 -0.7673743 -0.64010226
ctsem.DRIFT[2,2] -14.30917421 -14.2575784 -1.96650516
observed statistics: 1200
estimated parameters: 21
degrees of freedom: 1179
-2 log likelihood: 1912.234
number of observations: 100
Information Criteria:
| df Penalty | Parameters Penalty | Sample-Size Adjusted
AIC: -445.7662 1954.234 NA
BIC: -3517.2619 2008.942 1942.619
Some of your fit indices are missing.
To get them, fit saturated and independence models, and include them with
summary(yourModel, SaturatedLikelihood=..., IndependenceLikelihood=...).
timestamp: 2014-08-26 13:29:15
wall clock time: 147.219 secs
OpenMx version number: 2.0.0.3766
Need help? See help(mxSummary)
And with NPSOL (which finds a lower minimum, an unusual instance of NPSOL performing better than CSOLNP):
> memprobRun <- mxRun(memprobmodel2, intervals=T)
Running ctsem
> summary(memprobRun)
Summary of ctsem
free parameters:
name matrix row col Estimate Std.Error lbound ubound
1 drift11 discreteDRIFT 1 1 0.48053361 0.058810931 1
2 drift21 discreteDRIFT 2 1 0.07088314 0.067632864
3 drift12 discreteDRIFT 1 2 0.02174985 0.040749825
4 drift22 discreteDRIFT 2 2 0.58355130 0.060506367 1
5 diffusion11 discreteDIFFUSION 1 1 0.10607987 0.007501988 0
6 diffusion21 discreteDIFFUSION 2 1 0.01775367 0.007384163
7 diffusion22 discreteDIFFUSION 2 2 0.20119085 0.014252114 0
8 cint1 discreteCINT 1 1 9.14676984 1.056070080
9 cint2 discreteCINT 1 2 0.60300918 1.216352717
10 T1var11 withinphi 1 1 2.84693038 0.402707390 0
11 T1var21 withinphi 2 1 0.10749578 0.077824140
12 T1var22 withinphi 4 1 0.20862826 0.029504750 0
13 T1meanV1 T1MEANS 1 1 17.70547447 0.168728456
14 T1meanV2 T1MEANS 2 1 4.50301285 0.045675842
15 traitvar11 discreteTRAITVAR 1 1 0.72703940 0.196624735 0
16 traitvar21 discreteTRAITVAR 2 1 -0.08178721 0.096569456
17 traitvar22 discreteTRAITVAR 2 2 0.00000000 0.021245529 0*
18 T1traitcov11 T1TRAITCOV 1 1 1.96383353 0.435687274
19 T1traitcov21 T1TRAITCOV 2 1 -0.31360423 0.347418356
20 T1traitcov12 T1TRAITCOV 1 2 0.07033241 0.057403361
21 T1traitcov22 T1TRAITCOV 2 2 -0.01570925 0.017056963
confidence intervals:
lbound estimate ubound note
ctsem.DRIFT[1,1] -1.02186200 -0.73579299 -0.5211748
ctsem.DRIFT[2,1] 0.05265773 0.13389238 0.3880235
ctsem.DRIFT[1,2] -0.10635909 0.04108366 0.1966068
ctsem.DRIFT[2,2] -0.76725750 -0.54120110 -0.3561074
observed statistics: 1200
estimated parameters: 21
degrees of freedom: 1179
-2 log likelihood: 1673.847
number of observations: 100
Information Criteria:
| df Penalty | Parameters Penalty | Sample-Size Adjusted
AIC: -684.1532 1715.847 NA
BIC: -3755.6488 1770.555 1704.232
Some of your fit indices are missing.
To get them, fit saturated and independence models, and include them with
summary(yourModel, SaturatedLikelihood=..., IndependenceLikelihood=...).
timestamp: 2014-08-26 12:25:25
wall clock time: 361.053 secs
OpenMx version number: 2.0.0.3766
Need help? See help(mxSummary)
Re-running CSOLNP improves the solution somewhat but it gets stuck again with -2 log likelihood: 1890.516 and no improvement was obtained from a third run. It was quite happy to stick with the estimated parameters from NPSOL though, and return standard errors without NA's:
> params <- omxGetParameters(memprobRunNPSOL)
> params
drift11 drift21 drift12 drift22 diffusion11 diffusion21 diffusion22
0.48053361 0.07088314 0.02174985 0.58355130 0.10607987 0.01775367 0.20119085
cint1 cint2 T1var11 T1var21 T1var22 T1meanV1 T1meanV2
9.14676984 0.60300918 2.84693038 0.10749578 0.20862826 17.70547447 4.50301285
traitvar11 traitvar21 traitvar22 T1traitcov11 T1traitcov21 T1traitcov12 T1traitcov22
0.72703940 -0.08178721 0.00000000 1.96383353 -0.31360423 0.07033241 -0.01570925
> npsolution <- omxSetParameters(memprobmodel2,labels=names(params),values=params)
> mxOption(NULL, "Default optimizer", "CSOLNP")
> memprobRunCSOLNP <- mxRun(npsolution,intervals=T)
> summary(memprobRunCSOLNP)
Summary of ctsem
free parameters:
name matrix row col Estimate Std.Error lbound ubound
1 drift11 discreteDRIFT 1 1 4.804554e-01 0.058832219 1
2 drift21 discreteDRIFT 2 1 7.086569e-02 0.067959088
3 drift12 discreteDRIFT 1 2 2.174293e-02 0.040773525
4 drift22 discreteDRIFT 2 2 5.835050e-01 0.060510100 1
5 diffusion11 discreteDIFFUSION 1 1 1.060794e-01 0.007501939 0
6 diffusion21 discreteDIFFUSION 2 1 1.775283e-02 0.007383953
7 diffusion22 discreteDIFFUSION 2 2 2.011832e-01 0.014251074 0
8 cint1 discreteCINT 1 1 9.148188e+00 1.056562609
9 cint2 discreteCINT 1 2 6.035270e-01 1.222148807
10 T1var11 withinphi 1 1 2.847043e+00 0.402804371 0
11 T1var21 withinphi 2 1 1.075018e-01 0.077831659
12 T1var22 withinphi 4 1 2.086286e-01 0.029505046 0
13 T1meanV1 T1MEANS 1 1 1.770547e+01 0.168731787
14 T1meanV2 T1MEANS 2 1 4.503013e+00 0.045675890
15 traitvar11 discreteTRAITVAR 1 1 7.272923e-01 0.196782445 0
16 traitvar21 discreteTRAITVAR 2 1 -8.178111e-02 0.097040229
17 traitvar22 discreteTRAITVAR 2 2 3.552471e-14 0.021339816 0*
18 T1traitcov11 T1TRAITCOV 1 1 1.964354e+00 0.436023851
19 T1traitcov21 T1TRAITCOV 2 1 -3.135785e-01 0.349096086
20 T1traitcov12 T1TRAITCOV 1 2 7.035303e-02 0.057427140
21 T1traitcov22 T1TRAITCOV 2 2 -1.570743e-02 0.017101853
confidence intervals:
lbound estimate ubound note
ctsem.DRIFT[1,1] -1.0155040 -0.73595494 -0.6076324
ctsem.DRIFT[2,1] 0.0526557 0.13387535 0.1379083
ctsem.DRIFT[1,2] -0.1025269 0.04107548 0.1966052
ctsem.DRIFT[2,2] -0.7661326 -0.54127946 -0.3580459
observed statistics: 1200
estimated parameters: 21
degrees of freedom: 1179
-2 log likelihood: 1673.847
number of observations: 100
Information Criteria:
| df Penalty | Parameters Penalty | Sample-Size Adjusted
AIC: -684.1532 1715.847 NA
BIC: -3755.6488 1770.555 1704.232
Some of your fit indices are missing.
To get them, fit saturated and independence models, and include them with
summary(yourModel, SaturatedLikelihood=..., IndependenceLikelihood=...).
timestamp: 2014-08-26 13:50:29
wall clock time: 109.1414 secs
OpenMx version number: 2.0.0.3766
Need help? See help(mxSummary)
Offline
Joined: 04/19/2011 - 21:00
I cannot reproduce it either
I cannot reproduce it with CSOLNP either, on 32-bit Windows.
Edit: With revision 3751.
Offline
Joined: 07/31/2009 - 15:26
When using R 3.1.0 32-bit on
When using R 3.1.0 32-bit on Windows with the OpenMx Beta Binary, I don't get any huge memory usage. Running R and various background processes I'm using 2.24 GB of RAM. Running the example model with intervals=TRUE, it hangs around 2.25 GB for a while and eventually (probably when doing the intervals) it slowly climbs to 2.45 GB. On return after the model is done, everything goes back down to around 2.25 GB. This corresponds to a percent use between 27% and 30%. Nothing out of the ordinary to me. It sounds like I'm not replicating this problem.
Offline
Joined: 04/30/2013 - 11:05
Ok. I also don't get the
Ok. I also don't get the issue with 32 bit R, memory usage remains very low. When I switch back to 64bit, I use all the spare physical memory on my laptop (6gb) and windows 'commits' 16gb of virtual memory to the process (I'm not clear on what that commitment actually means - is it using it or just prepared to use it in some way? This is according to the windows 8 resource monitor)
But, now I'm embarrassed... in the example I posted the confidence intervals are set to an algebra. When I correctly set them to the 'discreteDRIFT' matrix rather than the 'DRIFT' algebra (confusion arose because I've been switching between different parameter sets to work out which optimizes best), things work fine. I'll be surprised, but I won't say it's impossible, if this was the problem in the other cases. I'm impressed that confidence intervals estimate on an algebra in the first place - is that intended?
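For anyone who hit the same confusion: in OpenMx, confidence intervals are requested with mxCI(), whose reference can name a matrix, an algebra, or a free-parameter label. A minimal sketch of pointing the intervals at the matrix instead of the algebra (an editorial addition; the object names follow this thread's example model, so adapt them to your own):
```r
library(OpenMx)

# Request CIs on the 'discreteDRIFT' matrix rather than the 'DRIFT' algebra.
# (mxModel appends the new mxCI to any intervals already in the model.)
memprobmodel <- mxModel(memprobmodel, mxCI("discreteDRIFT"))
fit <- mxRun(memprobmodel, intervals = TRUE)
summary(fit)  # the 'confidence intervals' block now refers to discreteDRIFT
```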
Offline
Joined: 07/31/2009 - 15:14
Confidence Intervals on Algebra
Yes, that is a fully intended feature which has been present in classic Mx since 1995 and was designed into OpenMx from its earliest days.
I do hope that the memory issues are solved. Running the problem with Valgrind did not reveal any memory leaks. We really appreciate your input - keep the comments coming!
Offline
Joined: 04/30/2013 - 11:05
Ok, just confirming that the
Ok, just confirming that the issue does happen when I set confidence intervals on a free parameter, as I normally would... no example as I didn't catch it before the pc froze. I'll go to 32 bit R for the time being.
Offline
Joined: 04/19/2011 - 21:00
I'm working on this
On Friday, I was running your memory-problematic model on a 64-bit Windows machine, under a debugger. When I compile without enabling multithreading, I notice that it doesn't memory-hog, but it does hang indefinitely. I'm trying to figure out where it gets stuck.
EDIT: Actually, I can tell from checkpointing that it's not hanging. It's just running a lot more slowly in debug mode than I thought. I also managed to trigger the memory leak on my 32-bit machine by running Charles' model repeatedly with mxTryHard() (in build from trunk).
Offline
Joined: 04/30/2013 - 11:05
Yes, I seem to encounter
Yes, I seem to encounter quite a lot of cases of starting value sensitivity with more complex continuous time models... making me think perhaps a bayesian approach would work better, but I'd love to hear any other suggestions or thoughts for dealing with the issue.
Offline
Joined: 07/31/2009 - 15:14
mxVersion()
Please run mxVersion() and paste the output into a reply. We are having much difficulty reproducing the error you report, and want to make sure that we are using exactly the same version.
Offline
Joined: 04/30/2013 - 11:05
Ok, right, on my machine with
Ok, right, on my machine with more memory the above model also fits, but memory usage does still spike to 6gb or so, which illustrates what seems (to me) to be the problem (or potential improvement), as npsol fits with a steady 100mb or so. Does memory usage not start going up rapidly after a few minutes for you two? I'm surprised it fits on 32 bit windows actually, I would have thought it would definitely hit memory problems. I've been trying to generate a more problematic example but can't at the moment, if I get one that either memory spikes faster, or higher, I'll post it.
> mxVersion()
OpenMx version: 2.0.0.0
R version: R version 3.1.1 (2014-07-10)
Platform: x86_64-w64-mingw32
Default optimiser: CSOLNP
This is with commit 9ce8fba on the master branch, on windows 7 and windows 8 pc's.
Offline
Joined: 07/31/2009 - 15:14
Strange version number
Charles
I strongly suspect that this is a bug that has already been fixed, and that you are using an outdated version of the Beta. Your version number looks odd, it does not include a build number on the end, like this: 2.0.0.3766
When you say commit 9ce8fba I am confused (though others on the dev team may not be). Were you building from source? The svn tree is currently at version 3776.
Cheers
Mike
Offline
Joined: 04/30/2013 - 11:05
I was also surprised at the
I was also surprised at the version number thing... I have rstudio setup with a project linked via git to the gitorious openmx (which is where I got the commit reference from), and build by telling rstudio to build (after specifying additional 'install' argument to the make command). This has worked ok in the past for getting updates, I can see the recent source code and see a recent change to default summary output wherein the optimizer is reported.
Offline
Joined: 07/31/2009 - 15:14
Looks like a github thing
If you could build from the svn repository version, per http://openmx.psyc.virginia.edu/wiki/howto-build-openmx-source-repository then I think the problem will go away. And you'll get a sensible version number.
Cheers
Mike
Offline
Joined: 04/30/2013 - 11:05
No change in behaviour, model
No change in behaviour, model still goes to 6gb of memory...
OpenMx version: 2.0.0.3777
R version: R version 3.1.1 (2014-07-10)
Platform: x86_64-w64-mingw32
Default optimiser: CSOLNP
Does anybody know if / how I can impose a lower memory limit on 64 bit windows R? memory.limit doesn't want to let me decrease it. If I could do this I assume I would avoid the hard reboots on windows 7 at least (my windows 8 machine has nicer behaviour in this instance - instead of the machine bogging down to the point that I can't kill the task, it just pops a msg box complaining about mem usage).
Offline
Joined: 05/24/2012 - 00:35
memory limit
I'm not sure how to impose a memory limit in Windows, but you'll need to impose a limit on application memory as a whole. OpenMx does not use R's memory in many cases so an R limit is not going to have much of an effect.
Offline
Joined: 05/24/2012 - 00:35
git version number
Yeah, commit 9ce8fba is a GIT version number. We still use SVN as the definitive source code repository so we need a SVN build number.
Offline
Joined: 07/31/2009 - 15:26
Replication on 64-bit
I'm getting the same behavior on a Windows 7 64-bit machine running R 3.1 64-bit on the OpenMx binary. It looks like when confidence intervals start, memory usage quickly and linearly increases to 100% RAM. Interestingly, the same machine running the same OpenMx on 32-bit R shows no problem.
Offline
Joined: 04/19/2011 - 21:00
Specific to 64-bit Windows?
I should have tried to reproduce the problem on the 64-bit Windows machine in my office last week before I left for the long weekend... Anyhow, I just ran Charles' memprobmodel2, with intervals=T, and R's memory usage began to climb ceaselessly, as he described. So, it appears to be something specific to confidence intervals, with CSOLNP, under 64-bit Windows.
FWIW:
> mxVersion()
OpenMx version: 2.0.0.3751
R version: R version 3.0.2 (2013-09-25)
Platform: x86_64-w64-mingw32
Default optimiser: CSOLNP
Offline
Joined: 04/19/2011 - 21:00
Compiler?
Charles, I take it you are building OpenMx from source on your machine, correct? Which compiler are you using? Do you use the Rtools toolchain?
Offline
Joined: 04/30/2013 - 11:05
Yes, building from source,
Yes, building from source, using rtools.
|
2017-04-30 16:47:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5039873123168945, "perplexity": 9755.013092147807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125719.13/warc/CC-MAIN-20170423031205-00242-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://www.openwetware.org/wiki/BioSysBio:abstracts/2007/Thorsten_Lenser/AppendixC
|
# BioSysBio:abstracts/2007/Thorsten Lenser/AppendixC
## Proof of exact association between maximal independent sets and organizations of size N
Given an undirected graph $G=\langle V, E\rangle$, where $V=\{v_1 \dots v_N\}$ is a set of N vertices and E is a set of edges, an algebraic chemistry $\langle {\mathcal M}, {\mathcal R}\rangle$ can be constructed as described in Appendix B, where ${\mathcal M}= \{a_1 \dots a_{2N}\}=\{s_i^0,s_i^1|i=1 \dots N\}$ is a set of 2N molecular species and ${\mathcal R}=\{(A_j \rightarrow B_j)\}$ is a set of reaction rules. Here we prove that the largest organizations contained in the constructed reaction network have exactly N species, and that those organizations correspond to maximal independent sets of the given graph.
At first, we define the association between a set of vertices and a set of species.
### Definition
Let $I \subset V$ be a set of vertices. We call $S_{I} = \{ s_{i}^{1} | v_i \in I\} \cup \{s_{i}^{0} | v_i \notin I\}$ the set of species that is induced by I.
In other words, $S_{I} = \{s_1^{b_1} \dots s_i^{b_i} \dots s_N^{b_N} \}$ where $b_i = 1$ when $v_i \in I$ and $b_i = 0$ otherwise.
### Lemma
In the reaction network $\langle {\mathcal M},{\mathcal R} \rangle$ constructed as described in Appendix B, no organization can contain species $s_{k}^{0}$ and $s_{k}^{1}$ together. Therefore, no organization with a size (number of species) greater than N can exist.
Proof of this lemma is given below.
### Theorem
Set I of vertices is a maximal independent set if and only if the induced set SI of species is an organization.
### Proof
#### Left to right
Let $I \subset V$ be a maximal independent set (MIS) and SI be the set of species induced by I. We have to show that SI is an organization, i.e. closed and self-maintaining.
Closure: Assume that SI is not closed, i.e., there exists a reaction $(A_j \rightarrow B_j) \in {\mathcal R}$ that produces a species that is not in the set SI. If the reaction has the form $s_j^1 \to s_k^0 (\in {\mathcal N})$ for an edge $(v_j,v_k)\in E$, then we know $s_j^1 \in S_{I}$ and thus $v_j \in I$. Since this reaction is assumed to violate closure, $s_k^0 \notin S_I$, and by the definition of SI we then have $s_k^1 \in S_I$, i.e., $v_k \in I$. But then I contains both vj and vk while $(v_j,v_k)\in E$, contradicting the assumption that I is an independent set.
On the other hand, the reaction can have the form $s_h^0 + s_l^0 + \dots + s_m^0 \to n_k s_k^1 (\in {\mathcal V})$, whose left-hand side collects $s_p^0$ for all neighbors vp of a vertex vk. In that case we know that $s_p^0\in S_I$ for all neighbors vp of vk, so no vertex neighboring vk is included in I. Moreover, vk itself is not in I, since the produced species $s_k^1$ is assumed not to be in SI. But then $I \cup \{v_k\}$ would still be an independent set, contradicting the fact that the independent set I is maximal.
These arguments show that no reaction can produce a species outside SI. Therefore, the set is closed.
Self-maintenance: Consider the flux vector $\mathbf{v}$ that is 1 for all reactions involving only elements of SI on the left-hand side, and 0 for all others. Given this $\mathbf{v}$, the rate of change of every species in SI is 0, since the inflow and outflow of these species are of equal size. For the species not in SI, no reaction takes place, so their concentrations do not change. Therefore, $\mathbf{v}$ fulfills the self-maintenance condition and SI is self-maintaining.
#### Right to left
Given a set of vertices I and its induced set of species SI, which is an organization, we need to show that I is a MIS.
Given two vertices vp and vq from the set I, we know that $s_p^1 \in S_I$ and $s_q^1 \in S_I$. If there were an edge $(v_p,v_q) \in E$, the reaction $s_p^1 \to s_q^0$ would produce $s_q^0$ inside the organization SI, which is impossible since $s_q^1 \in S_I$ and, by the lemma, no organization contains both. Therefore, we conclude that $(v_p,v_q) \notin E$ and I is an independent set.
If I were not a "maximal" IS, we could add a vertex $v_p \in V$ to I, and $I \cup \{v_p\}$ would still be an IS. In other words, there would exist an index p such that $S'_I=((S_I \backslash \{s_p^0\}) \cup \{s_p^1\})$ is the set of species induced by an independent set. Since $I \cup \{v_p\}$ is independent, no neighbor vq of vp lies in I, so $s_q^0 \in S_I$ for all neighbors vq of vp $(q \neq p)$. But then the reaction in ${\mathcal V}$ that has all of these $s_q^0$ on its left-hand side produces $s_p^1 \notin S_I$, so SI is not closed. This contradicts the assumption that SI is an organization. Thus no such index p exists, and I is maximal.
### Proof of the lemma
Let $O \subset {\mathcal M}$ be an organization, and suppose the organization contains $s_{k}^0$ and $s_{k}^1$ simultaneously ($s_{k}^{0}, s_{k}^{1} \in O$).
From the definition of the organization to be self-maintaining, there exists a flux vector
$\mathbf{v} = (v_{A_1 \rightarrow B_1}, \dots, v_{A_j \rightarrow B_j}, \dots, v_{A_{|{\mathcal R}|} \rightarrow B_{|{\mathcal R}|}})^T$
satisfying the following three conditions:
• $v_{A_j \rightarrow B_j} > 0$ if $A_j \in \mathcal{P}_M(O)$
• $v_{A_j \rightarrow B_j} = 0$ if $A_j \notin \mathcal{P}_M(O)$
• $f_i \geq 0$ if $a_i \in O$ where $(f_1, \dots, f_i, \dots, f_{|{\mathcal M}|})^T = \mathbf{M v}$.
where $\mathbf{M}=(m_{ij})$ is a stoichiometric matrix.
Because of the third condition, the sum of the production rates fi over the species in the organization must be nonnegative:
$\sum_{\{i | a_i \in O\}} f_i = \sum_{\{i | a_i \in O\}} \sum_{j=1}^{|{\mathcal R}|} m_{ij}v_{A_j \rightarrow B_j} \geq 0$
Splitting the sum over reactions according to the three reaction types $\mathcal V, \mathcal N, \mathcal D$:
$\sum_{\{i | a_i \in O\}} f_i = \sum_{\{i | a_i \in O\}} \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal V}\}} m_{ij}v_{A_j \rightarrow B_j} + \sum_{\{i | a_i \in O\}} \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal N}\}} m_{ij}v_{A_j \rightarrow B_j} + \sum_{\{i | a_i \in O\}} \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal D}\}} m_{ij}v_{A_j \rightarrow B_j}$ $= \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal V}\}} \sum_{\{i | a_i \in O\}} m_{ij}v_{A_j \rightarrow B_j} + \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal N}\}} \sum_{\{i | a_i \in O\}} m_{ij}v_{A_j \rightarrow B_j} + \sum_{\{j |(A_j \rightarrow B_j)\in {\mathcal D}\}} \sum_{\{i | a_i \in O\}} m_{ij}v_{A_j \rightarrow B_j}$
By the construction in Appendix B, the stoichiometric coefficients mij of the species in O sum to 0 for the reactions of type $\mathcal V$ and $\mathcal N$ (whenever such a reaction has a nonzero flux, all of its reactants lie in O, and closure places its products in O as well, so production and consumption within O cancel):
$\forall j | (A_j \rightarrow B_j) \in ({\mathcal V} \cup {\mathcal N}) : \sum_{i | a_i \in O} m_{ij} = 0$
Thus, the first two sums are equal to zero. It follows that the third term must be non-negative.
However, all the coefficient sums for reactions of type $\mathcal D$ are negative. If the organization O contains both $s_k^0$ and $s_k^1$ at the same time, then the left-hand side of the reaction $(s_{k}^0 + s_{k}^1 \to \emptyset)\in {\mathcal D}$ lies entirely in O, so by the first condition its flux must be positive, and the third term is strictly negative. The total sum of the production rates is then negative, so at least one production rate fi must be negative. This contradicts the third condition in the definition of an organization. Hence no organization contains both $s_k^0$ and $s_k^1$, and since ${\mathcal M}$ consists of only the N pairs $\{s_i^0, s_i^1\}$, no organization bigger than N can exist.
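To illustrate the correspondence on a concrete graph (an editorial sketch, not part of the proof): for the path graph $v_1 - v_2 - v_3$, the maximal independent sets are $\{v_2\}$ and $\{v_1, v_3\}$, inducing the species sets $\{s_1^0, s_2^1, s_3^0\}$ and $\{s_1^1, s_2^0, s_3^1\}$. A brute-force enumeration in R:
```r
# Enumerate maximal independent sets of a small graph and print the induced
# species sets S_I (s_i^1 if v_i is in I, s_i^0 otherwise).
edges <- rbind(c(1, 2), c(2, 3))   # path graph v1 - v2 - v3
N <- 3

is_independent <- function(I) {
  !any(edges[, 1] %in% I & edges[, 2] %in% I)  # no edge lies inside I
}

subsets <- lapply(0:(2^N - 1),
                  function(m) which(bitwAnd(m, 2^(0:(N - 1))) > 0))
indep <- Filter(is_independent, subsets)

# I is maximal iff no independent set properly contains it.
is_maximal <- function(I) {
  !any(vapply(indep,
              function(J) length(J) > length(I) && all(I %in% J),
              logical(1)))
}
maximal <- Filter(is_maximal, indep)

for (I in maximal) {
  S_I <- ifelse(1:N %in% I, paste0("s", 1:N, "^1"), paste0("s", 1:N, "^0"))
  cat("I = {", paste(I, collapse = ", "), "}  ->  S_I = {",
      paste(S_I, collapse = ", "), "}\n")
}
# I = { 2 }     ->  S_I = { s1^0, s2^1, s3^0 }
# I = { 1, 3 }  ->  S_I = { s1^1, s2^0, s3^1 }
```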
|
2017-03-28 12:36:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 70, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116774797439575, "perplexity": 308.13759870808536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189734.17/warc/CC-MAIN-20170322212949-00131-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://chem.libretexts.org/Ancillary_Materials/Worksheets/Worksheets%3A_Inorganic_Chemistry/Worksheets%3A_Inorganic_Chemistry_(Guided_Inquery)/Symmetry_and_Vibrational_Spectra_(Worksheet)
|
# Symmetry and Vibrational Spectra (Worksheet)
Name: ______________________________
Section: _____________________________
Student ID#:__________________________
Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.
Learning Objectives
• To use symmetry to find point groups
• To use character tables to predict how many vibrational modes should appear in the IR and Raman Spectra
• Be able to identify the symmetry elements in a compound
• Be able to identify the point group of a molecule
• Use the character table for a point group to predict vibrational spectra
An understanding of symmetry allows us to understand the bonding and physical properties of the compounds we are studying. Symmetry can be used to predict the nature of molecular orbitals and to predict both the electronic and vibrational spectroscopic modes for a given molecule. We will begin with molecular symmetry, then find the appropriate point groups, which will indicate the character tables available for the molecules and, ultimately, the active vibrational modes.
## PART I - SYMMETRY
Axes of Rotation – Cn, where n is the number of times the angle of rotation divides into 360° (see the formula below). BF3 has a C3 and three C2's. The axis with the highest n is the principal axis and will help define the point group.
$$n = \frac{360}{\text{angle of rotation}}\] Mirror Planes – s this is a reflection through a plane within the molecule. There are three types, vertical ($$\sigma_{v}$$) which contains (“parallel”) the principle rotation axis, horizontal ($$\sigma_{h}$$) which is perpendicular to the principle axis and dihedral ($$\sigma_{d}$$) which contains the principle rotation axis but lies between C2’s that are perpendicular to the principle axis. XeF4 has examples of all three types of mirror planes. The sh contains all of the atoms, the $$\sigma_{v}$$ contains the F on ones side the Xe in the center and the F on the other. A $$\sigma_{d}$$ contains only the Xe and reflects the F’s on both sides (as shown in the wedges diagram, this would be perpendicular to this paper). Center of Inversion – i this allows for the molecule to be turned “inside out” through a single point. CoF6 has a center of inversion through the Co atom. Notice how the top and bottom F’s change position. Improper Rotation Axis – Sn This is a rotation followed by a reflection through a mirror plane perpendicular to the axis. CCl4 has an S4. Identity – E do nothing. For some molecules this will be the only symmetry element present. ## Key Questions 1. BF3 has 3 C2’s what elements are contained in the C2’s? Are they parallel or perpendicular to the principle axis? 2. How many C2’s are there in XeF4? 3. Define in your own words, the three mirror planes. 4. Indicate one example of each of the three planes using XeF4 as an example. 5. Using ethylene as an example, prove to yourself that a C2 is not the same as a center of inversion. 6. Prove that BF3 has an improper rotation axis, indicate what the symbol should be for that molecule. 7. Find the symmetry elements present for each of the following. BCl2F, CCl3Br, PtCl3Br (square planar), PtCl4 (square planar), trans-1,2-dibromo-1,2-dichloroethylene. 8. Download SymmetryApp.jar and the library from Blackboard and check your answers to #7. Note, you may need to put in dummy atoms to check some operations. ## PART II – Point Groups All of the symmetry operations for a particular molecule can be grouped together into point groups. These symbols allow us to be more specific about the geometry of a molecule. It eliminates controversial names such as “see saw” and they allow us to use character tables to better understand bonding as well as electronic and vibrational spectra. Later this semester we will use this to understand more fully Molecular Orbital Theory. Once you understand how to pick out symmetry elements for molecules you can start to use the following flowchart to determine the point groups of your molecules. Note This is a beginner’s crutch, you will need to do this without the chart for exams and if you plan to take the GRE subject exam in Chemistry. Example $$\PageIndex{1}$$: Find the Point Group for Water. So starting at the first question…. Water is not linear and it is not one of the special groups Td, Oh or Ih. It does have a principle rotation axis, a C2 that bisects the HOH angle. It does not have any perpendicular C2’s and it does not have a sh. Yes it does have a sv, one that contains the framework and the other that is perpendicular to the page and contains the O and reflects the H’s. Therefore, the Point group for water is C2v ## Key Questions 1. For all of the molecules in #7 in Part I, assign point groups. 2. In the boxes marked a, b and c on the flow chart, does the n mean that they molecules must have n number of those symmetry elements? 
Don’t look it up, rather deduce it by looking at different molecules that answer yes at those three boxes. (ie find a molecule with Cnv symmetry to answer the question at a and b and Dnd to answer the question at c). Use the SymmetryApp to help. ## Part III - Character Tables for Point Groups The complete set of symmetry operations for any point group is listed in a matrix called a Character Table. Let’s look at the character table for the C2v point group. Remember water has C2v symmetry. C2v E C2 $$\sigma_{v}$$ (xz) $$\sigma'_{v}$$ (yz) A1 1 1 1 1 z x2, y2, z2 A2 1 1 -1 -1 Rz xy B1 1 -1 1 -1 x, Ry xz B2 1 -1 -1 1 y, Rx yz Across the top of the table, notice the symmetry element symbols, if there are multiple elements present (i.e. 2C2’s) then a 2 would appear before the symbol to indicate the number of operations in that class. The order for the point group is the sum of the total number of operations (here it would be 4) Underneath C2v there are representations for the effect that each operation has on mathematical operations. The values in the boxes that cross (for example box E;A1 has a value of 1) are called characters. Characters indicate the effect of the symmetry operation on a given representation. ## Key Question 1. For the D4d point group, list the symmetry elements present, the number of each operation in that class, the representations and find the order. ## Part IV - Spectroscopy The boxes without a column heading at the end of the table are the spectroscopy active components. The indicate the Microwave (Rxyz), the IR (xyz) and the Raman (xy, yz, xz, x2, y2, z2) active representations. These change from point group to point group. We can use character tables and point groups to find the vibrational modes present for each molecule and predict the number of peaks in the IR and Raman spectra. To be IR active a molecule must have a change in dipole moment during a vibration, for a molecule to be Raman active a molecule must have a change in polarizability. Both techniques are measuring the change in the molecule when it aborbs light of a specific frequency. The number of modes for IR can be calculated by calculating the degrees of freedom using the formula 3N-6 for non-linear molecules and 3n-5 for linear molecules where N is the number of atoms. Using character tables and group theory we can better understand what atoms and vibrations correspond to the degrees of freedom calculated. ### How to find the IR active and Raman Active Modes using point groups 1. Find the point group of the molecule and the character table. 2. Determine the Reducible Representation, $$\Gamma$$ for the molecule. (what atoms move when you carry out the operations times the vector contribution) 3. Determine the Irreducible Representation. (use the following equation)$$N_{x} = \frac{1}{Order} \Sigma [ (\# \; \text{of operations in class}) \times (\Gamma) \times (\text{character of x})]$$4. Determine the number of IR active modes (x, y, z) and Raman active modes (quadratic functions of x, y, z in other words (xy, yz, xz, x2, y2, z2, or their combinations x2-y2) Example $$\PageIndex{2}$$: 1. Water has C2v symmetry. 2. To find $$\Gamma$$ we must consider the overall vector geometry for each atom. We can simplify this by doing this in two steps (a) consider which atoms are effected and then (b) consider the vectors associated with the xyz coordinate. #### Atom If an atom is unchanged, then it is given a value of 1, if it is moved a 0. 
For water:

• The E operation leaves all atoms in place, so it has a character of 3 (1 for each atom, times 3 atoms).
• C2: only the O is left unmoved, so it has a character of 1.
• The $$\sigma_{v}$$ that contains the framework: 3, as no atoms move.
• The $$\sigma_{v}$$ that is perpendicular to the framework of atoms: 1, as both H's move but the O is unchanged.

#### Vectors

For each atom, we need to figure out how the Cartesian coordinates change. So the next thing to consider is how x, y, and z transform under each operation. Consider one of the atoms that does not move. If a vector is unchanged, it contributes +1; if it is reversed, −1.

For water:

• E: no change to x, y, or z, so E has the value 3.
• C2: for the unmoved O, x and y turn into their negatives (−2) and z remains unchanged (+1), so overall it is −1.
• The $$\sigma_{v}$$ that contains the framework: one axis becomes negative while the other two remain the same, overall +1.
• The $$\sigma_{v}$$ that is perpendicular to the framework of atoms: one axis becomes negative while the other two remain the same, overall +1.

So now we can tabulate the number of unshifted atoms for each operation in the point group and the vector contribution per unshifted atom; multiplying them yields $$\Gamma$$.

| | E | C2 | $$\sigma_{v(xz)}$$ | $$\sigma_{v(yz)}$$ |
|---|---|----|--------------------|--------------------|
| Unshifted atoms | 3 | 1 | 1 | 3 |
| Vector contribution | 3 | -1 | 1 | 1 |
| $$\Gamma$$ | 9 | -1 | 1 | 3 |

3. Once we have $$\Gamma$$ we can find the irreducible representations using the equation $$N_{x} = \frac{1}{\text{Order}} \Sigma [ (\# \; \text{of operations in class}) \times (\Gamma) \times (\text{character of x})]$$
$$\begin{split} N_{A_{1}} &= \tfrac{1}{4} \left[ (1)(9)(1) + (1)(-1)(1) + (1)(1)(1) + (1)(3)(1) \right] = 3 \\ N_{A_{2}} &= \tfrac{1}{4} \left[ (1)(9)(1) + (1)(-1)(1) + (1)(1)(-1) + (1)(3)(-1) \right] = 1 \\ N_{B_{1}} &= \tfrac{1}{4} \left[ (1)(9)(1) + (1)(-1)(-1) + (1)(1)(1) + (1)(3)(-1) \right] = 2 \\ N_{B_{2}} &= \tfrac{1}{4} \left[ (1)(9)(1) + (1)(-1)(-1) + (1)(1)(-1) + (1)(3)(1) \right] = 3 \end{split}$$
So $$\Gamma$$ reduces to $$3A_{1} + A_{2} + 2B_{1} + 3B_{2}$$.
4. At this point we need to subtract the translational and rotational motions, which are found from the x, y, z and the $$R_{x}$$, $$R_{y}$$, $$R_{z}$$ symmetry functions. Subtraction yields $$2A_{1} + B_{2}$$, so for water we would expect three absorptions in the IR (three vibrational modes).
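The reduction formula is easy to script as a cross-check. Here is a minimal Python sketch (my own, not part of the original handout) that reproduces the reduction of water's $$\Gamma = (9, -1, 1, 3)$$ in C2v:

```python
# Reduce Gamma for water in C2v: N_x = (1/order) * sum(n_ops * Gamma * character).
# Character table rows and Gamma are transcribed from the tables above.
c2v = {
    "A1": (1, 1, 1, 1),    # characters under E, C2, sigma_v(xz), sigma_v'(yz)
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}
n_ops = (1, 1, 1, 1)       # one operation per class in C2v; order = 4
order = sum(n_ops)
gamma = (9, -1, 1, 3)

for label, chars in c2v.items():
    n = sum(k * g * x for k, g, x in zip(n_ops, gamma, chars)) // order
    print(label, n)        # prints: A1 3, A2 1, B1 2, B2 3
```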
## Key Questions
1. Calculate the degrees of freedom for the water molecule.
2. For XeF4, calculate the degrees of freedom expected.
3. Use the water example as a guide and find the representations for the IR active modes for XeF4.
4. What would change if we were only looking at the bonding in a molecule?
## Reference
• Kelley J. Donaghy, SUNY-ESF
https://community.slickedit.com/index.php?action=printpage;topic=16972.0
# SlickEdit Community
## SlickEdit Product Discussion => SlickEdit® => Topic started by: timur on June 20, 2019, 09:43:04 pm
Title: Does reflow comment work with Unix shell scripts?
Post by: timur on June 20, 2019, 09:43:04 pm
Is Document->Reflow Comment supposed to work on Unix shell scripts? I have tried all sorts of options and variations, and this feature never does anything for me.
One feature that I need often is to reflow a comment block so that the right margin is at the 80-character mark. I sometimes copy/paste comment blocks to another part of my script and then indent the block. So for example:
```
if [[ -z $TEST ]]
then
    # This is a very long comment line that reaches all the way to column 80
    # and then some.
fi
```

becomes:

```
if [[ -z $TEST ]]
then
    command
    if [ $? -ne 0 ]
    then
        # This is a very long comment line that reaches all the way to column 80
        # and then some.
    fi
fi
```

And I want SlickEdit to change that to:

```
if [[ -z $TEST ]]
then
    command
    if [ $? -ne 0 ]
    then
        # This is a very long comment line that reaches all the way to column
        # 80 and then some.
    fi
fi
```
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: Clark on June 22, 2019, 12:25:12 am
Reflow Comment isn't supported for shell scripting languages yet and the menu item should be disabled but isn't (bug).
Looks like Reflow Comment is broken if Comment Wrap is off. Reflow Comment should work for say C++ which supports Comment Wrap even if Comment Wrap is off.
We will add hot fixes for this
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: timur on June 22, 2019, 03:21:03 am
Could you add comment reflow for other languages? If you can reflow // comments in C++, then surely it wouldn't be too difficult to reflow # comments in Bash.
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: Clark on June 22, 2019, 06:20:16 pm
I've added reflow_comment support for almost all languages SlickEdit supports. I was able to add support for Shell scripting languages (took some hardwired code though).
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: hs2 on June 22, 2019, 07:35:27 pm
Wow Clark .. ++HP ;D
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: guth on June 22, 2019, 08:08:35 pm
Talking about reflow, would it be possible to make reflow context-aware in LaTeX documents? For instance, with the cursor on an \item in a \begin{itemize}...\end{itemize} environment, reflow would only reflow the current \item and not the current paragraph as SlickEdit sees it.
```
\begin{itemize}
\item a very long line and we want it to be reflowed at, say, 80th column, so after a relow, the item shall be like below.
\item a very long line and we want it to be reflowed at, say, 80th column, so
after a relow, the item shall be like below.
\end{itemize}
```
but doing a reflow-paragraph with the cursor on the second line, i.e., somewhere in the first \item, the result is as follows:
```
\begin{itemize}
\item a very long line and we want it to be reflowed at, say, 80th column, so
after a relow, the item shall be like below. \item a very long line and we
want it to be reflowed at, say, 80th column, so after a relow, the item shall
be like below.
\end{itemize}
```
Title: Re: Does reflow comment work with Unix shell scripts?
Post by: Clark on June 23, 2019, 12:39:46 am
I think it's doable but I'm not that familiar with LaTeX. As a workaround, you can reflow a selection.
http://ptp.ipap.jp/cgi-bin/findarticle?journal=PTP&author=M.Oda
## Search Result
### Search Conditions
Years
All Years
for journal 'PTP'
author 'M.* Oda' : 28
total : 28
### Search Results : 28 articles were found.
1. Progress of Theoretical Physics Vol. 16 No. 3 (1956) pp. 250-251 :
On the Energy Dependence of the Cross Section for the Production of the Penetrating Shower Underground
S. Higashi, M. Oda, T. Oshio, H. Shibata, K. Watanabe and Y. Watase
2. Progress of Theoretical Physics Vol. 16 No. 3 (1956) pp. 252-254 :
On the Nuclear Interaction of $\mu$-Meson below Ground
Takashi Kitamura and Minoru Oda
3. Progress of Theoretical Physics Vol. 47 No. 1 (1972) pp. 304-316 : (5)
Non-Leptonic Hyperon Decays and Ur-citon Scheme
Shin Ishida, Katsuya Nakamura and Masuho Oda
4. Progress of Theoretical Physics Vol. 50 No. 6 (1973) pp. 2000-2026 : (5)
Shin Ishida, Masuho Oda and Yasuhito Yamazaki
5. Progress of Theoretical Physics Vol. 54 No. 2 (1975) pp. 542-554 : (5)
Universal Yukawa Interactions of Multi-Local Hadrons and Their Application to Ground Particles
Keisuke Furuya, Shin Ishida and Masuho Oda
6. Progress of Theoretical Physics Vol. 54 No. 3 (1975) pp. 899-901 : (5)
A Suppression Mechanism for Radially Excited States and New Particles
Shin Ishida and Masuho Oda
7. Progress of Theoretical Physics Vol. 54 No. 4 (1975) pp. 1221-1224 : (5)
New Particles and Freedom of an Internal String
Shin Ishida and Masuho Oda
8. Progress of Theoretical Physics Vol. 59 No. 1 (1978) pp. 291-293 : (5)
Decays of Baryon Resonances. I
Shin Ishida, Masuho Oda and Jun Otokozawa
9. Progress of Theoretical Physics Vol. 59 No. 1 (1978) pp. 294-296 : (5)
Decays of Baryon Resonances. II
Shin Ishida, Masuho Oda and Jun Otokozawa
10. Progress of Theoretical Physics Vol. 59 No. 3 (1978) pp. 959-963 : (5)
Mass Spectrum of Exotic Particles in a Bose Quark Model
Shin Ishida and Masuho Oda
11. Progress of Theoretical Physics Vol. 60 No. 3 (1978) pp. 828-839 : (5)
Di-Nucleon Exotics and Quark Statistics
Shin Ishida and Masuho Oda
12. Progress of Theoretical Physics Vol. 61 No. 5 (1979) pp. 1401-1411 : (5)
Di-Nucleon Exotics in a Joined Spring Model and Statistics of Quarks
Shin Ishida and Masuho Oda
13. Progress of Theoretical Physics Vol. 61 No. 5 (1979) pp. 1420-1425 : (5)
Bose Quarks and Non-Leptonic Weak Interactions of Charmed Mesons
Shin Ishida and Masuho Oda
14. Progress of Theoretical Physics Vol. 68 No. 3 (1982) pp. 883-897 : (5)
Multi-Quark Hadrons in the Joined-Spring Quark Model
Shin Ishida, Masuho Oda, Katsumi Takeuchi and Motohiko Watanabe
15. Progress of Theoretical Physics Vol. 71 No. 4 (1984) pp. 806-815 : (5)
Radiative Decays of Ground-State Mesons in the Covariant Quark Model
Shin Ishida, Susumu Hinata, Masuho Oda, Katsumi Takeuchi and Kenji Yamada
16. Progress of Theoretical Physics Vol. 73 No. 6 (1985) pp. 1502-1514 : (5)
Electromagnetic Form Factors of Deuteron at Large Momentum Transfers in the Covariant Oscillator Quark Model
Naofusa Honzawa, Shin Ishida, Yoshiki Kizukuri, Mikio Namiki, Masuho Oda, Keisuke Okano and Noriyuki Oshimo
17. Progress of Theoretical Physics Vol. 74 No. 4 (1985) pp. 939-942 : (5)
Covariant Description of Deuteron and “Intrinsic” Quadrupole Moment
Naofusa Honzawa, Shin Ishida and Masuho Oda
18. Progress of Theoretical Physics Vol. 82 No. 1 (1989) pp. 119-126 : (5)
Is the $f_{1}$(1420) Our First Hybrid Meson?
Shin Ishida, Masuho Oda, Haruhiko Sawazaki and Kenji Yamada
19. Progress of Theoretical Physics Vol. 88 No. 1 (1992) pp. 89-101 : (5)
“Variant Mass and Width” of the Axial-Vector $a_{1}$ and $b_{1}$ Mesons and Existence of Hybrid States
Shin Ishida, Masuho Oda, Haruhiko Sawazaki and Kenji Yamada
20. Progress of Theoretical Physics Vol. 89 No. 5 (1993) pp. 1033-1045 : (5)
A Universal Spring and Meson Trajectories
Shin Ishida and Masuho Oda
21. Progress of Theoretical Physics Vol. 93 No. 4 (1995) pp. 781-787 : (5)
Effective Weak Transition Currents of Light-through-Heavy-Quark Meson and Baryon Systems in the Covariant Oscillator Quark Model
Shin Ishida, Muneyuki Ishida and Masuho Oda
22. Progress of Theoretical Physics Vol. 93 No. 5 (1995) pp. 939-947 : (5)
Spin-Independent Confining Force and a Boosted $LS$-Coupling Scheme for Covariant Description of Hadron World
Shin Ishida, Muneyuki Ishida and Masuho Oda
23. Progress of Theoretical Physics Vol. 98 No. 1 (1997) pp. 159-167 : (5)
Spectra of Exclusive Semi-Leptonic Decays of $\boldsymbol{B}$-Meson in the Covariant Oscillator Quark Model
Muneyuki Ishida, Shin Ishida and Masuho Oda
24. Progress of Theoretical Physics Vol. 99 No. 2 (1998) pp. 257-270 : (5)
Electromagnetic Transitions of Heavy Quarkonia in the Boosted $\boldsymbol{LS}$-Coupling Scheme
Shin Ishida, Akiyoshi Morikawa and Masuho Oda
25. Progress of Theoretical Physics Vol. 101 No. 4 (1999) pp. 947-957 : (5)
Non-Leptonic Two-Meson Decays of $\mathbf{B}$ Mesons in the Covariant Oscillator Quark Model with Factorization Ansatz
Rukmani Mohanta, Anjan K. Giri, Mohinder P. Khanna, Muneyuki Ishida, Shin Ishida and Masuho Oda
26. Progress of Theoretical Physics Vol. 101 No. 4 (1999) pp. 959-969 : (5)
Hadronic Weak Decays of the $\mathbf{\Lambda}_b$ Baryon in the Covariant Oscillator Quark Model
Rukmani Mohanta, Anjan K. Giri, Mohinder P. Khanna, Muneyuki Ishida, Shin Ishida and Masuho Oda
27. Progress of Theoretical Physics Vol. 101 No. 6 (1999) pp. 1285-1311 : (5)
Exclusive Semi-Leptonic Decays of Heavy Mesons in the Covariant Oscillator Quark Model
Masuho Oda, Muneyuki Ishida and Shin Ishida
28. Progress of Theoretical Physics Vol. 103 No. 6 (2000) pp. 1213-1225 : (5)
Semi-Leptonic $\mathbf{B}$ Meson Decays to Excited D Mesons in the Covariant Oscillator Quark Model
Masuho Oda, Kazunori Nishimura, Muneyuki Ishida and Shin Ishida
https://math.stackexchange.com/questions/2013595/prove-that-the-determinant-of-a-matrix-is-zero
# Prove that the determinant of a matrix is zero
Hi I need some help with this question:
Let $$A$$ be an $$n \times n$$ matrix, let $$i, j, k$$ be pairwise distinct indices, $$1 \leq i, j, k \leq n$$, and let $$\lambda,\mu \in \mathbb R$$ be arbitrary real numbers. Suppose that $$a_k$$, the $$k$$-th row vector of $$A$$, is equal to $$\lambda a_i + \mu a_j$$, where $$a_i, a_j \in \mathbb R^n$$ denote the $$i$$-th and the $$j$$-th row vectors of $$A$$ respectively. Prove that $$\det(A) = 0$$.
I think I need to split the matrix up into two separate matrices, then use the fact that one of these matrices has either a row of zeros or a row that is a multiple of another row, and then use $$\det(AB)=\det(A)\det(B)$$ to show that one of these matrices has a determinant of zero, so the whole thing has a determinant of zero. So I was wondering: is there a way to split the matrix up so that it suits my method?
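Not a proof, but a quick numerical sanity check of the claim is easy. Here is a Python sketch with a hypothetical $$4 \times 4$$ example (the values are mine) where one row is $$\lambda a_i + \mu a_j$$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam, mu = 2.5, -1.3
A[3] = lam * A[0] + mu * A[1]   # row k is a linear combination of rows i and j

print(np.linalg.det(A))         # ~0, up to floating-point rounding
```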
https://www.mathworks.com/help/matlab/ref/trapz.html
# trapz
Trapezoidal numerical integration
## Syntax
``Q = trapz(Y)``
``Q = trapz(X,Y)``
``Q = trapz(___,dim)``
## Description
`Q = trapz(Y)` computes the approximate integral of `Y` via the trapezoidal method with unit spacing. The size of `Y` determines the dimension to integrate along:

• If `Y` is a vector, then `trapz(Y)` is the approximate integral of `Y`.
• If `Y` is a matrix, then `trapz(Y)` integrates over each column and returns a row vector of integration values.
• If `Y` is a multidimensional array, then `trapz(Y)` integrates over the first dimension whose size does not equal 1. The size of this dimension becomes 1, and the sizes of other dimensions remain unchanged.

`Q = trapz(X,Y)` integrates `Y` with respect to the coordinates or scalar spacing specified by `X`.

• If `X` is a vector of coordinates, then `length(X)` must be equal to the size of the first dimension of `Y` whose size does not equal 1.
• If `X` is a scalar spacing, then `trapz(X,Y)` is equivalent to `X*trapz(Y)`.

`Q = trapz(___,dim)` integrates along the dimension `dim` using any of the previous syntaxes. You must specify `Y`, and optionally can specify `X`. If you specify `X`, then it can be a scalar or a vector with length equal to `size(Y,dim)`. For example, if `Y` is a matrix, then `trapz(X,Y,2)` integrates each row of `Y`.
## Examples
Calculate the integral of a vector where the spacing between data points is 1.
Create a numeric vector of data.
`Y = [1 4 9 16 25];`
`Y` contains function values for $f(x) = x^2$ in the domain [1, 5].
Use `trapz` to integrate the data with unit spacing.
`Q = trapz(Y)`
```Q = 42 ```
This approximate integration yields a value of `42`. In this case, the exact answer is a little less, $41\frac{1}{3}$. The `trapz` function overestimates the value of the integral because f(x) is concave up.
Calculate the integral of a vector where the spacing between data points is uniform, but not equal to 1.
Create a domain vector.
`X = 0:pi/100:pi;`
Calculate the sine of `X`.
`Y = sin(X);`
Integrate `Y` using `trapz`.
`Q = trapz(X,Y)`
```Q = 1.9998 ```
When the spacing between points is constant, but not equal to 1, an alternative to creating a vector for `X` is to specify the scalar spacing value. In that case, `trapz(pi/100,Y)` is the same as `pi/100*trapz(Y)`.
Integrate the rows of a matrix where the data has a nonuniform spacing.
Create a vector of x-coordinates and a matrix of observations that take place at the irregular intervals. The rows of `Y` represent velocity data, taken at the times contained in `X`, for three different trials.
```X = [1 2.5 7 10]; Y = [5.2 7.7 9.6 13.2; 4.8 7.0 10.5 14.5; 4.9 6.5 10.2 13.8];```
Use `trapz` to integrate each row independently and find the total distance traveled in each trial. Since the data is not evaluated at constant intervals, specify `X` to indicate the spacing between the data points. Specify `dim = 2` since the data is in the rows of `Y`.
`Q1 = trapz(X,Y,2)`
```
Q1 = 3×1

   82.8000
   85.7250
   82.1250
```
The result is a column vector of integration values, one for each row in `Y`.
Create a grid of domain values.
```x = -3:.1:3; y = -5:.1:5; [X,Y] = meshgrid(x,y);```
Calculate the function $f(x,y) = x^2 + y^2$ on the grid.
`F = X.^2 + Y.^2;`
`trapz` integrates numeric data rather than functional expressions, so in general the expression does not need to be known to use `trapz` on a matrix of data. In cases where the functional expression is known, you can instead use `integral`, `integral2`, or `integral3`.
Use `trapz` to approximate the double integral
$$I = \int_{-5}^{5} \int_{-3}^{3} \left( x^{2} + y^{2} \right) \, dx \, dy$$
To perform double or triple integrations on an array of numeric data, nest function calls to `trapz`.
`I = trapz(y,trapz(x,F,2))`
```I = 680.2000 ```
`trapz` performs the integration over x first, producing a column vector. Then, the integration over y reduces the column vector to a single scalar. `trapz` slightly overestimates the exact answer of 680 because f(x,y) is concave up.
## Input Arguments
Numeric data, specified as a vector, matrix, or multidimensional array. By default, `trapz` integrates along the first dimension of `Y` whose size does not equal 1.
Data Types: `single` | `double`
Complex Number Support: Yes
Point spacing, specified as `1` (default), a uniform scalar spacing, or a vector of coordinates.
• If `X` is a scalar, then it specifies a uniform spacing between the data points and `trapz(X,Y)` is equivalent to `X*trapz(Y)`.
• If `X` is a vector, then it specifies x-coordinates for the data points and `length(X)` must be the same as the size of the integration dimension in `Y`.
Data Types: `single` | `double`
Dimension to operate along, specified as a positive integer scalar. If no value is specified, then the default is the first array dimension whose size does not equal 1.
Consider a two-dimensional input array, `Y`:
• `trapz(Y,1)` works on successive elements in the columns of `Y` and returns a row vector.
• `trapz(Y,2)` works on successive elements in the rows of `Y` and returns a column vector.
If `dim` is greater than `ndims(Y)`, then `trapz` returns an array of zeros of the same size as `Y`.
### Trapezoidal Method
`trapz` performs numerical integration via the trapezoidal method. This method approximates the integration over an interval by breaking the area down into trapezoids with more easily computable areas. For example, here is a trapezoidal integration of the sine function using eight evenly-spaced trapezoids:
For an integration with `N+1` evenly spaced points, the approximation is
$$\int_{a}^{b} f(x)\,dx \;\approx\; \frac{b-a}{2N} \sum_{n=1}^{N} \left( f(x_{n}) + f(x_{n+1}) \right) = \frac{b-a}{2N} \left[ f(x_{1}) + 2f(x_{2}) + \cdots + 2f(x_{N}) + f(x_{N+1}) \right],$$
where the spacing between each point is equal to the scalar value $\frac{b-a}{N}$. By default MATLAB® uses a spacing of 1.
If the spacing between the `N+1` points is not constant, then the formula generalizes to
$$\int_{a}^{b} f(x)\,dx \;\approx\; \frac{1}{2} \sum_{n=1}^{N} (x_{n+1} - x_{n}) \left[ f(x_{n}) + f(x_{n+1}) \right],$$

where $a = x_{1} < x_{2} < \cdots < x_{N} < x_{N+1} = b$, and $(x_{n+1} - x_{n})$ is the spacing between each consecutive pair of points.
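For readers working outside MATLAB, here is a minimal NumPy sketch (mine, not part of the MATLAB documentation) of the nonuniform formula above; NumPy's built-in `np.trapz` implements the same rule, so the two agree:

```python
import numpy as np

def trapezoid(x, y):
    # 0.5 * sum over n of (x[n+1] - x[n]) * (y[n] + y[n+1])
    dx = np.diff(x)
    return 0.5 * np.sum(dx * (y[:-1] + y[1:]))

x = np.array([1.0, 2.5, 7.0, 10.0])
y = np.array([5.2, 7.7, 9.6, 13.2])   # first row of Y from the example above
print(trapezoid(x, y))                # 82.8, matching trapz(X,Y,2) above
print(np.trapz(y, x))                 # same result from NumPy's built-in
```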
## Tips
• Use `trapz` and `cumtrapz` to perform numerical integrations on discrete data sets. Use `integral`, `integral2`, or `integral3` instead if a functional expression for the data is available.
• `trapz` reduces the size of the dimension it operates on to 1, and returns only the final integration value. `cumtrapz` also returns the intermediate integration values, preserving the size of the dimension it operates on.
https://stats.stackexchange.com/questions/344189/posterior-distribution-and-mcmc
# Posterior distribution and MCMC [duplicate]
I have read something like 6 articles on Markov Chain Monte carlo methods, there are a couple of basic points I can't seem to wrap my head around.
1. How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?
2. Again, how can you determine which parameter estimate "fits your data better" without first knowing your posterior distribution?
3. If you already know the properties of your posterior distribution (as is indicated by 1) and 2)), then what's the point of using this method in the first place?
This just seems like circular reasoning to me.
If this were not a clear conflict of interest, I would suggest you invest more time in the topic of MCMC algorithms and read a whole book rather than a few (6?) articles that can only provide a partial perspective.
How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?
MCMC is based on the assumption that the product$$\pi(\theta)f(x^\text{obs}|\theta)$$can be numerically computed (hence is known) for a given $\theta$, where $x^\text{obs}$ denotes the observation, $\pi(\cdot)$ the prior, and $f(x^\text{obs}|\theta)$ the likelihood. This does not imply an in-depth knowledge about this function of $\theta$. Still, from a mathematical perspective the posterior density is completely and entirely determined by $$\pi(\theta|x^\text{obs})=\dfrac{\pi(\theta)f(x^\text{obs}|\theta)}{\int_ \Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta}\tag{1}$$Thus, it is not particularly surprising that simulation methods can be found using solely the input of the product $$\pi(\theta)\times f(x^\text{obs}|\theta)$$ The amazing feature of Monte Carlo methods is that some methods like Markov chain Monte Carlo (MCMC) algorithms do not formally require anything further than this computation of the product, when compared with accept-reject algorithms for instance, which call for an upper bound. Related software like Stan operates on this input and still delivers high-end performance with tools like NUTS and HMC, including numerical differentiation.
A side comment written later in the light of some of the other answers is that the normalising constant$$\mathfrak{Z}=\int_ \Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta$$is not particularly useful for conducting Bayesian inference in that, were I to "know" its exact numerical value in addition to the function in the numerator of (1), $\mathfrak{Z}=3.17232\,10^{-23}$ say, I would not have made any progress towards finding Bayes estimates or credible regions. (The only exception when this constant matters is in conducting Bayesian model comparison.)
When teaching about MCMC algorithms, my analogy is that in a videogame we have a complete map (the posterior) and a moving player that can only illuminate a portion of the map at once. Visualising the entire map and spotting the highest regions is possible with enough attempts (and a perfect remembrance of things past!). A local and primitive knowledge of the posterior density (up to a constant) is therefore sufficient to learn about the distribution.
Again, how can you determine which parameter estimate "fits your data better" without first knowing your posterior distribution?
Again, the distribution is known in a mathematical or numerical sense. The Bayes parameter estimates provided by MCMC, if needed, are based on the same principle as most simulation methods, the law of large numbers. More generally, Monte Carlo based (Bayesian) inference replaces the exact posterior distribution with an empirical version. Hence, once more, a numerical approach to the posterior, one value at a time, is sufficient to build a convergent representation of the associated estimator. The only restriction is the available computing time, i.e., the number of terms one can call in the law of large numbers approximation.
If you already know the properties of your posterior distribution (as is indicated by 1) and 2)), then what's the point of using this method in the first place?
It is the very paradox of (1) that this is a perfectly well-defined mathematical object such that most integrals related with (1) including its denominator may be out of reach from analytical and numerical methods. Exploiting the stochastic nature of the object by simulation methods (Monte Carlo integration) is a natural and manageable alternative that has proven immensely helpful.
How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?
In Bayesian analysis we usually know that the posterior distribution is proportional to some known function (the likelihood multiplied by the prior) but we don't know the constant of integration that would give us the actual posterior density:
$$\pi( \theta | \mathbb{x} ) = \frac{\overbrace{L_\mathbb{x}(\theta) \pi(\theta)}^{\text{Known}}}{\underbrace{\int L_\mathbb{x}(\theta) \pi(\theta) d\theta}_{\text{Unknown}}} \overset{\theta}{\propto} \overbrace{L_\mathbb{x}(\theta) \pi(\theta)}^{\text{Known}}.$$
So we actually do know one major property of the distribution; that it is proportional to a particular known function. Now, in the context of MCMC analysis, a Markov chain takes in a starting value $\theta_{(0)}$ and produces a series of values $\theta_{(1)}, \theta_{(2)}, \theta_{(3)}, ...$ for this parameter.
The Markov chain has a stationary distribution which is the distribution that preserves itself if you run it through the chain. Under certain broad assumptions (e.g., the chain is irreducible, aperiodic), the stationary distribution will also be the limiting distribution of the Markov chain, so that regardless of how you choose the starting value, this will be the distribution that the outputs converge towards as you run the chain longer and longer. It turns out that it is possible to design a Markov chain with a stationary distribution equal to the posterior distribution, even though we don't know exactly what that distribution is. That is, it is possible to design a Markov chain that has $\pi( \theta | \mathbb{x} )$ as its stationary limiting distribution, even if all we know is that $\pi( \theta | \mathbb{x} ) \propto L_\mathbb{x}(\theta) \pi(\theta)$. There are various ways to design this kind of Markov chain, and these various designs constitute available MCMC algorithms for generating values from the posterior distribution.
Once we have designed an MCMC method like this, we know that we can feed in any arbitrary starting value $\theta_{(0)}$ and the distribution of the outputs will converge to the posterior distribution (since this is the stationary limiting distribution of the chain). So we can draw (non-independent) samples from the posterior distribution by starting with an arbitrary starting value, feeding it into the MCMC algorithm, waiting for the chain to converge close to its stationary distribution, and then taking the subsequent outputs as our draws.
This usually involves generating $\theta_{(1)}, \theta_{(2)}, \theta_{(3)}, ..., \theta_{(M)}$ for some large value of $M$, and discarding $B < M$ "burn-in" iterations to allow the convergence to occur, leaving us with draws $\theta_{(B+1)}, \theta_{(B+2)}, \theta_{(B+3)}, ..., \theta_{(M)} \sim \pi( \theta | \mathbb{x} )$ (approximately).
If you already know the properties of your posterior distribution ... then what's the point of using this method in the first place?
Use of the MCMC simulation allows us to go from a state where we know that the posterior distribution is proportional to some given function (the likelihood multiplied by the prior) to actually simulating from this distribution. From these simulations we can estimate the constant of integration for the posterior distribution, and then we have a good estimate of the actual distribution. We can also use these simulations to estimate other aspects of the posterior distribution, such as its moments.
Now, bear in mind that MCMC is not the only way we can do this. Another method would be to use some other method of numerical integration to try to find the constant-of-integration for the posterior distribution. MCMC goes directly to simulation of the values, rather than attempting to estimate the constant-of-integration, so it is a popular method.
Your confusion is understandable. Surely, if you already know $p(\theta|X)$, why would you need to draw samples of $\theta$ under this distribution? The answer is usually that the distribution is multivariate, and you want to marginalize over some dimensions of $\theta$ but not others. So for instance, $\theta$ might be a vector of 10 parameters, and you're interested in the marginal distribution $p(\theta_1|X)=\int p(\theta|X)d\theta_{2:10}$. The integrals required to do this marginalization are often very hard to compute exactly. They may be analytically intractable, and (deterministic) numerical integration is often cumbersome in high dimensions.
This is where MCMC can help. As long as you know $p(\theta|X)$ up to a constant of multiplication, you can generate samples of $\theta$ that follow this distribution. Then, given a sufficient number of such samples, you can simply look at the distribution of sampled values of $\theta_1$ (e.g. by making a histogram), and those samples will approximate the desired marginal distribution. Compared to numerical integration methods, MCMC is more efficient because it spends more time exploring parts of the distribution where more of the probability mass is concentrated. Also, many MCMC algorithms (such as the classic Metropolis Hastings algorithm) only require that you know the target distribution up to a constant of proportionality, which is helpful if you don't know the normalization constant required to make the distribution proper (which is very often the case, because to compute that constant itself often requires computing a multivariate integral just as complex as the one you're interested in).
Edit: it occurred to me that this perhaps doesn't fully answer your first question. The answer to this is that MCMC only requires that you can calculate the posterior probability (density) of a certain parameter value (up to a constant of proportionality). So all you need is a function where, if you put a parameter value in, it gives you its probability under the target distribution (or a value proportional to that probability). That is the sense in which the target distribution must be 'known'. But you don't need to know anything else about it. You can be blissfully ignorant about the mean & covariance of the distribution, or about the little squiggles and bumps that it has here or there, or any number of other things (although some of those things can be helpful to know in order to make MCMC run more smoothly).
Just one example to address part (1).
Sometimes you can evaluate the posterior up to a partition function only.
For example, you know that $p(x)= \frac{1}{z}f(x)$, but $z$ is unknown.
The Metropolis–Hastings algorithm:

- Initialize $x_0$.
- Choose some proposal distribution $q$.

Repeat:

- Sample $y$ from $q(\cdot \mid x_{i-1})$.
- Accept $y$ if $p(y)$ is large (essentially), via an "acceptance rule."
- If accepted, set $x_i = y$.

But at each step we don't know $p(y)$; we only know $f(y)$, because $z$ is unknown. However, the acceptance rule can be written (essentially) as a ratio of $p(y)$ and $p(x_{i-1})$, so $z$ cancels.
The final output of the sampling then provides samples from $p(x)$, $z$ included, but you never had to compute (or know) $z$.
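For concreteness, here is a minimal runnable Python version of the sampler sketched above (my own sketch, not from the answer), assuming a standard normal target known only through the unnormalized $f(x)=e^{-x^2/2}$ and a symmetric Gaussian random-walk proposal, so the acceptance ratio reduces to $f(y)/f(x)$ and $z$ never appears:

```python
import numpy as np

def f(x):
    # Unnormalized target: p(x) = f(x)/z with z = sqrt(2*pi), which we never use.
    return np.exp(-0.5 * x**2)

rng = np.random.default_rng(42)
x = 0.0                  # initialize x_0
samples = []
for _ in range(50_000):
    y = x + rng.normal()                       # sample y from q(. | x_{i-1})
    if rng.random() < min(1.0, f(y) / f(x)):   # acceptance rule: z cancels in the ratio
        x = y
    samples.append(x)

print(np.mean(samples), np.std(samples))       # roughly 0 and 1, as expected
```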
https://stable.publiclab.org/notes/warren/09-28-2016/fly-a-small-camera-on-a-very-portable-squid-shaped-sled-kite
# Public Lab Research note
This is an attempt to replicate an activity.
# Fly a small camera on a very portable (squid-shaped) sled kite
by warren | 28 Sep 22:52
### What I want to do
I set out to test out a setup pretty similar to the Mini Kite Kit the Public Lab Kits initiative has been piloting -- which folds up into a fabric bag, and is really easy to pack in a bag.
I don't have that kit, but @liz gave me a beautiful squid shaped kite which seems to be of similar construction -- a sled shape, though with far more tails. She said it cost $4 in Taiwan -- Liz, is that correct??? (Update: apparently$2 ??? See link/comment below)
And it's a bit bigger, but still packs down real small with no spars. I'm happy to post this as its own activity if it's not close enough to the original, but I thought it could add confidence to the idea of a highly portable kite mapping kit built around a #mobius camera and a sled-type kite.
## My attempt and results
I flew at George Island in the Boston harbor, so I had really clear, 10-15mph wind -- flags were mostly extended in the wind, although it started out a bit calmer.
We'd had trouble flying this kite at #LEAFFEST a few weeks ago in inconsistent, gusty wind in a mountainside clearing, so I was nervous. But it was SO GREAT -- easy to get into the air by myself, stable enough to walk around with slowly, even at only 20-30 feet up.
### Camera setup
It easily carried the 40 gram Mobius camera, although I did something wrong with the settings so I don't have any good pictures. I put the Mobius on a piece of taped-up foam core, with a carabiner attached via a key ring. The Mobius is rubber-banded to the board, with a piece of tape inside-out between it and the board. This is really secure and easy. See detail in the main image above.
### Questions and next steps
I'd love to repeat and take pictures -- I've done this with rigid kites, like this flight in Barcelona, but it'd be great to see it with a cheaper, more portable setup.
I also used a 1000 foot reel of 50 pound kite string, instead of my usual 100 pound string. This is lighter, and with a narrower profile, may have less drag and fly at a steeper angle than the thicker string. I think it's viable for this kite -- maybe with some wind speed guidelines.
### Why I'm interested
Testing out/replicating the mini kite kit, so more people can give it a try! My whole setup, laid out in the lead image, is really compact now.
### Try it yourself
If you have a Mini Kite Kit, please try this out and try it out yourself, posting photos and noting any tweaks you made, or difficulties you encountered!
The 5.5M kite was ~USD$4 on taobao in mainland China -- @shanlter can you post the link to the taobao store that you helped me buy from?

omg you're kidding me that is so cheap this kite is so awesome

The delivery was convenient too -- we bought a dozen, and they were shipped in a tape-wrapped tarp to a local mini-mart in Guangzhou where we picked it up.

The squid looks so nice in the sky! Here is the link: https://detail.tmall.com/item.htm?spm=a230r.1.0.0.YLWxJc&id=44263991526&ns=1&skuId=97095576442 (the 4 meter purple one only cost $2! ^-^)
Thanks for the link! So many colors :-)
You had me at "squid-shaped kite."
I've been using the DIY Mini Kite Kit but for the life of me cannot seem to get the Mobius camera to go into time lapse mode. I tried the setup app and the settings look right, but doesn't seem to work. Anyone else had issues with this?
Hmm, I have an older one, so the firmware may have changed. Maybe best to ask about this on the main Grassroots Mapping list, as I think I remember other folks talking about the changes?
http://www.ck12.org/book/Basic-Speller-Student-Materials/r1/section/12.18/
# 12.18: Some More About <gh>
Difficulty Level: At Grade Created by: CK-12
1. You've seen that in a very few words [g] is spelled <gh>. But <gh> is not always pronounced [g]: Sometimes it is pronounced [f], and sometimes it is not pronounced at all. Carefully read the following words with <gh>. Be sure you know how each one is pronounced. Mark each word to show what the <gh> spells as we have done with ghastly, freight, and toughness. Use the zero sign, \begin{align*}[\varnothing]\end{align*}, if the <gh> is not pronounced at all.
\begin{align*}& \text{ghastly} && \text{ghosts} && \text{roughen} && \text{ghoulish} && \text{eighth} && \text{overweight} \\ & [g] \\ & \text{freight} && \text{coughed} && \text{neighbor} && \text{tightest} && \text{delightful} && \text{ghetto} \\ & \quad [\varnothing] \\ & \text{toughness} && \text{enough} && \text{although} && \text{laughter} && \text{knight} && \text{height}\\ & \qquad [f] \end{align*}
2. Sort the words into this matrix:
3. When <gh> comes at the beginning of an element, how is it pronounced? _________. When <gh> spells the sound [f], is it at the front, middle, or end of the element it is in? _________. When <gh> spells the sound [f], does it have a short vowel in front of it, or a long vowel? _________ If there is a long vowel sound right in front of <gh>, is it pronounced or not pronounced? _________.
Word Find. This Find contains at least twenty-three words that contain the spelling <gh>. As you find them sort them into the groups described below:
https://www.johndcook.com/blog/tag/special-functions/
# Superfactorial
The factorial of a positive integer n is the product of the numbers from 1 up to and including n:
n! = 1 × 2 × 3 × … × n.
The superfactorial of n is the product of the factorials of the numbers from 1 up to and including n:
S(n) = 1! × 2! × 3! × … × n!.
For example,
S(5) = 1! 2! 3! 4! 5! = 1 × 2 × 6 × 24 × 120 = 34560.
Here are three examples of where superfactorial pops up.
## Vandermonde determinant
If $V$ is the $n \times n$ matrix whose $ij$ entry is $i^{j-1}$, then its determinant is $S(n-1)$. For instance, with $n = 3$,

$$V = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \end{pmatrix}, \qquad \det V = (2-1)(3-1)(3-2) = 2 = S(2).$$
V is an example of a Vandermonde matrix.
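A quick check of the determinant claim with SymPy, using exact integer arithmetic (the helper name is mine):

```python
from sympy import Matrix, factorial, prod

def superfactorial(n):
    return prod(factorial(k) for k in range(1, n + 1))

print(superfactorial(5))   # 34560, matching S(5) above

for n in range(2, 7):
    # 1-based ij entry i**(j-1); sympy's lambda receives 0-based indices
    V = Matrix(n, n, lambda i, j: (i + 1) ** j)
    assert V.det() == superfactorial(n - 1)
```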
## Permutation tensor
One way to define the permutation symbol uses superfactorial:

$$\epsilon_{a_1 a_2 \cdots a_n} = \frac{1}{S(n-1)} \prod_{1 \le p < q \le n} (a_q - a_p).$$
## Barnes G-function
The Barnes G-function extends superfactorial to the complex plane analogously to how the gamma function extends factorial. For positive integers $n$,

$$G(n+2) = 1! \, 2! \, 3! \cdots n! = S(n).$$
Here's a plot of $G(x)$ over $[-2, 4]$, produced by

Plot[BarnesG[x], {x, -2, 4}]

in Mathematica.
# Negative space graph
Here is a plot of the first 30 Chebyshev polynomials. Notice the interesting patterns in the white space.
Forman Acton famously described Chebyshev polynomials as “cosine curves with a somewhat disturbed horizontal scale.” However, plotting cosines with frequencies 1 to 30 gives you pretty much a solid square. Something about the way Chebyshev polynomials disturb the horizontal scale creates the interesting pattern in negative space. (The distortion is different for each polynomial; otherwise the cosine picture would be a rescaling of the Chebyshev picture.)
I found the example above in a book that referenced a book by Theodore Rivlin. There’s a new edition of Rivlin’s book coming out in August, so maybe it will say something about the gaps.
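For anyone who wants to reproduce the figure, here is a minimal matplotlib sketch (mine, not from the original post):

```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial.chebyshev import Chebyshev

x = np.linspace(-1, 1, 2000)
for n in range(1, 31):
    # Chebyshev.basis(n) is the degree-n Chebyshev polynomial T_n
    plt.plot(x, Chebyshev.basis(n)(x), color="black", linewidth=0.5)
plt.show()   # the patterns appear in the white space between the curves
```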
Update: Here are analogous graphs for Legendre polynomials, plotting the even and odd ordered polynomials separately. They also have conspicuous holes, but they don’t fill the unit square the way Chebyshev polynomials do.
# To integrate the impossible integral
In the Broadway musical Man of La Mancha, Don Quixote sings
To dream the impossible dream
To fight the unbeatable foe
To bear with unbearable sorrow
To run where the brave dare not go
Yesterday my daughter asked me to integrate the impossible integral, and this post has a few thoughts on the quixotic quest to run where the brave calculus student dare not go.
The problem apparently required computing the indefinite integral of the square root of sine:

$$\int \sqrt{\sin x} \, dx$$
I say apparently for reasons that will soon be clear.
My first thought was that some sort of clever u-substitution would work. This was coming from a homework problem, so I was in the mindset of expecting every problem to be doable. If I ran across this integral while I had my consultant hat on rather than my father/tutor hat, I would have thought “It probably doesn’t have an elementary closed form because most integrals don’t.”
It turns out I should have had my professional hat on, because the integral does indeed not have an elementary closed form. There is a well-known function which is an antiderivative of √sin, but it’s not an elementary function. (“Well-known” relative to professional mathematicians, not relative to calculus students.)
You can evaluate the integral as

$$\int \sqrt{\sin x} \, dx = -2\,E\!\left(\frac{\pi}{4} - \frac{x}{2} \;\middle|\; 2\right) + C,$$

where $E$ is Legendre's "elliptic integral of the second kind."
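As a numerical sanity check of that closed form, here is a Python sketch (mine) using mpmath, whose two-argument `ellipe` evaluates the incomplete integral $E(\phi \,|\, m)$:

```python
import mpmath as mp

def F(x):
    # Candidate antiderivative of sqrt(sin x) from the closed form above
    return -2 * mp.ellipe(mp.pi/4 - x/2, 2)

a, b = 0.3, 2.8
print(F(b) - F(a))                                    # closed-form difference
print(mp.quad(lambda t: mp.sqrt(mp.sin(t)), [a, b]))  # numerical integral; agrees
```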
I assume it wasn’t actually necessary to compute the antiderivative, though I didn’t see the full problem. In the artificial world of the calculus classroom, everything works out nicely. And this is a shame.
For one thing, it creates a false impression. Most integrals are “impossible” in the sense of not having an elementary closed form. Students who go on to use calculus for anything more than artificial homework problems may incorrectly assume they’ve done something wrong when they encounter an integral in the wild.
For another, it’s a missed opportunity. Or maybe two or three missed opportunities. These may or may not be appropriate to bring up, depending on how receptive the audience is. (But we said at the beginning we would “run where the brave calculus student dare not go.”)
It’s a missed opportunity to emphasize what integrals really are, to separate meaning from calculation technique. Is the integral of √sin impossible? Not in the sense that it doesn’t exist. Yes, there are functions whose derivative is √sin, at least if we limit ourselves to a range where sine is not negative. No, the integral does not exist in the sense of a finite combination of functions a calculus student would have seen.
It’s also a missed opportunity to show that we can define new functions as the solution to problems we can’t solve otherwise. The elliptic integrals mentioned above are functions defined in terms of integrals that cannot be computed in more elementary terms. By giving the function a name, we can compare where it comes up in other contexts. For example, I wrote a few months ago about the problem of finding the length of a serpentine wall. This also pops out of a calculus problem that doesn’t have an elementary solution, and in fact it also has a solution in terms of elliptic integrals.
Finally, it's a missed opportunity to give a glimpse of the wider mathematical landscape. If not every elementary function has an elementary antiderivative, do they at least have antiderivatives in terms of special functions such as elliptic integrals? No, but that's a good question.
Any continuous function has an antiderivative, and that antiderivative might be an elementary function, or it might be a combination of elementary functions and familiar special functions. Or it might be an obscure special function, something not exactly common, but something that has been named and studied before. Or it might be a perfectly valid function that hasn’t been given a name yet. Maybe it’s too specialized to deserve a name, or maybe you’ve discovered something that comes up repeatedly and deserves to be cataloged.
This touches more broadly on the subject of what functions exist versus what functions have been named. Students implicitly assume these two categories are the same. Here’s an example of the same phenomenon in probability. It also touches on the gray area between what has been named and what hasn’t, and how you decide whether something deserves to be named.
Update: The next post gives a simple approximation to the integral in this post.
# Chebyshev’s other polynomials
There are two sequences of polynomials named after Chebyshev, and the first is so much more common that when authors say "Chebyshev polynomial" with no further qualification, they mean Chebyshev polynomials of the first kind. These are denoted $T_n$, so they get Chebyshev's initial [1]. The Chebyshev polynomials of the second kind are denoted $U_n$.
Chebyshev polynomials of the first kind are closely related to cosines. It would be nice to say that Chebyshev polynomials of the second kind are to sines what Chebyshev polynomials of the first kind are to cosines. That would be tidy, but it’s not true. There are relationships between the two kinds of Chebyshev polynomials, but they’re not that symmetric.
It is true that Chebyshev polynomials of the second kind satisfy a relation somewhat analogous to the relation

$$T_n(\cos\theta) = \cos n\theta$$

for his polynomials of the first kind, and it involves sines:

$$U_{n-1}(\cos\theta) = \frac{\sin n\theta}{\sin\theta}.$$
We can prove this with the equation we’ve been discussing in several posts lately, so there is yet more juice to squeeze from this lemon.
Once again we start with the equation

$$\cos n\theta + i \sin n\theta = \sum_{j=0}^{n} \binom{n}{j} \cos^{n-j}\theta \,(i \sin\theta)^j$$

and take the imaginary part of both sides. The odd terms of the sum contribute to the imaginary part, so we can assume $j = 2k + 1$. We make the replacement

$$\sin^{2k}\theta = (1 - \cos^2\theta)^k$$

and so we're left with a polynomial in $\cos\theta$, except for an extra factor of $\sin\theta$ in every term.
This shows that $\sin n\theta / \sin\theta$ is a polynomial in $\cos\theta$, and in fact a polynomial of degree $n-1$. Given that the functions $\sin n\theta/\sin\theta$ satisfy the same recurrence and initial conditions as the polynomials $U_{n-1}(\cos\theta)$ (the sum formula $\sin(n+1)\theta + \sin(n-1)\theta = 2\cos\theta\,\sin n\theta$ gives the Chebyshev recurrence $f_{n+1} = 2\cos\theta\, f_n - f_{n-1}$), it follows that the polynomial in question must be $U_{n-1}$.
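A quick numerical check of the identity, using SciPy's `eval_chebyu` to evaluate $U_n$ (a small sketch of mine):

```python
import numpy as np
from scipy.special import eval_chebyu

theta = np.linspace(0.1, 3.0, 7)
for n in range(1, 6):
    lhs = eval_chebyu(n - 1, np.cos(theta))    # U_{n-1}(cos theta)
    rhs = np.sin(n * theta) / np.sin(theta)    # sin(n theta) / sin(theta)
    assert np.allclose(lhs, rhs)
```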
[1] Chebyshev has been transliterated from the Russian as Chebysheff, Chebychov, Chebyshov, Tchebychev, Tchebycheff, Tschebyschev, Tschebyschef, Tschebyscheff, Chebychev, etc. It is conventional now to use "Chebyshev" as the name, at least in English, and to use "T" for the polynomials.
# More juice in the lemon
There’s more juice left in the lemon we’ve been squeezing lately.
A few days ago I first brought up the equation

$$\cos n\theta + i \sin n\theta = \sum_{j=0}^{n} \binom{n}{j} \cos^{n-j}\theta \,(i \sin\theta)^j,$$

which holds because both sides equal $\exp(in\theta)$.
Then a couple days ago I concluded a blog post by noting that by taking the real part of this equation and replacing $\sin^2\theta$ with $1 - \cos^2\theta$ one could express $\cos n\theta$ as a polynomial in $\cos\theta$,

$$\cos n\theta = \sum_{0 \le 2k \le n} (-1)^k \binom{n}{2k} \cos^{n-2k}\theta \,(1 - \cos^2\theta)^k,$$

and in fact this polynomial is the $n$th Chebyshev polynomial $T_n$ since these polynomials satisfy

$$T_n(\cos\theta) = \cos n\theta.$$
Now in this post I’d like to prove a relationship between Chebyshev polynomials and sines starting with the same raw material. The relationship between Chebyshev polynomials and cosines is well known, even a matter of definition depending on where you start, but the connection to sines is less well known.
Let's go back to the equation at the top of the post, replace $n$ with $2n + 1$, and take the imaginary part of both sides. The odd terms of the sum contribute to the imaginary part, so we sum over $j = 2\ell + 1$:

$$\sin(2n+1)\theta = \sum_{\ell=0}^{n} (-1)^\ell \binom{2n+1}{2\ell+1} \cos^{2n-2\ell}\theta \, \sin^{2\ell+1}\theta = (-1)^n \sum_{k=0}^{n} (-1)^k \binom{2n+1}{2k} \cos^{2k}\theta \, \sin^{2n+1-2k}\theta.$$

Here we did a change of variables $k = n - \ell$ and used $\binom{2n+1}{2n-2k+1} = \binom{2n+1}{2k}$.

The final expression is the expression we began with, only evaluated at $\sin\theta$ instead of $\cos\theta$. That is,

$$\sin(2n+1)\theta = (-1)^n \, T_{2n+1}(\sin\theta).$$

So for all $n$ we have

$$T_n(\cos\theta) = \cos n\theta,$$

and for odd $n$ we also have

$$T_n(\sin\theta) = \pm \sin n\theta.$$
The sign is positive when n is congruent to 1 mod 4 and negative when n is congruent to 3 mod 4.
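And a quick numerical check with SciPy's `eval_chebyt` (a small sketch of mine):

```python
import numpy as np
from scipy.special import eval_chebyt

theta = np.linspace(0.0, 6.0, 25)
for n in [1, 3, 5, 7, 9]:
    sign = 1 if n % 4 == 1 else -1         # + for n = 1 mod 4, - for n = 3 mod 4
    assert np.allclose(eval_chebyt(n, np.sin(theta)), sign * np.sin(n * theta))
```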
# Product of Chebyshev polynomials
Chebyshev polynomials satisfy a lot of identities, much like trig functions do. This post will look briefly at just one such identity.
Chebyshev polynomials $T_n$ are defined for $n = 0$ and $1$ by

$$T_0(x) = 1$$
$$T_1(x) = x$$

and for larger $n$ using the recurrence relation

$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).$$

This implies

$$T_2(x) = 2x\,T_1(x) - T_0(x) = 2x^2 - 1$$
$$T_3(x) = 2x\,T_2(x) - T_1(x) = 4x^3 - 3x$$
$$T_4(x) = 2x\,T_3(x) - T_2(x) = 8x^4 - 8x^2 + 1$$

and so forth.
Now for the identity for this post. If m ≥ n, then

2 Tm Tn = Tm+n + Tm-n.

In other words, the product of the mth and nth Chebyshev polynomials is the average of the (m+n)th and (m-n)th Chebyshev polynomials. For example,
2 T3(x) T1(x) = 2 (4x³ – 3x) x = 8x⁴ – 6x² = T4(x) + T2(x)
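This is easy to confirm with NumPy’s Chebyshev module; the following check is my addition, not part of the original post. Coefficient vectors are with respect to the Chebyshev basis, so T3 is represented as [0, 0, 0, 1].

from numpy.polynomial import chebyshev as C

T3 = [0, 0, 0, 1]  # coefficients of T3 in the Chebyshev basis
T1 = [0, 1]        # coefficients of T1

# T3 * T1 should equal (T2 + T4)/2, i.e. coefficients [0, 0, 0.5, 0, 0.5]
print(C.chebmul(T3, T1))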
The identity above is not at all apparent from the recursive definition of Chebyshev polynomials, but it follows quickly from the fact that
Tn(cos θ) = cos nθ.
Proof: Let θ = arccos x. Then
2 Tm(x) Tn(x)
= 2 Tm(cos θ) Tn(cos θ)
= 2 cos mθ cos nθ
= cos (m+n)θ + cos (m-n)θ

= Tm+n(cos θ) + Tm-n(cos θ)

= Tm+n(x) + Tm-n(x)
You might object that this only shows that the first and last line are equal for values of x that are cosines of some angle, i.e. values of x in [-1, 1]. But if two polynomials agree on an interval, they agree everywhere. In fact, you don’t need an entire interval. For polynomials of degree m+n, as above, it is enough that they agree on m + n + 1 points. (Along those lines, see Binomial coefficient trick.)
The close association between Chebyshev polynomials and cosines means you can often prove Chebyshev identities via trig identities as we did above.
Along those lines, we could have taken
Tn(cos θ) = cos nθ
as the definition of Chebyshev polynomials and then proved the recurrence relation above as a theorem, using trig identities in the proof.
Forman Acton suggested in his book Numerical Methods that Work that you should think of Chebyshev polynomials as “cosine curves with a somewhat disturbed horizontal scale.”
# Generalization of power polynomials
A while back I wrote about the Mittag-Leffler function which is a sort of generalization of the exponential function. There are also Mittag-Leffler polynomials that are a sort of generalization of the powers of x; more on that shortly.
## Recursive definition
The Mittag-Leffler polynomials can be defined recursively by M0(x) = 1
and

Mn(x) = x( Mn-1(x+1) + 2 Mn-1(x) + Mn-1(x-1) ) / 2

for n > 0.
## Contrast with orthogonal polynomials
This is an unusual recurrence if you’re used to orthogonal polynomials, which come up more often in application. For example, Chebyshev polynomials satisfy

Tn+1(x) = 2x Tn(x) – Tn-1(x)

and Hermite polynomials satisfy

Hn+1(x) = 2x Hn(x) – 2n Hn-1(x)
as I used as an example here.
All orthogonal polynomials satisfy a two-term recurrence like this where the value of each polynomial can be found from the value of the previous two polynomials.
Notice that with orthogonal polynomial recurrences the argument x doesn’t change, but the degrees of polynomials do. But with Mittag-Leffler polynomials the opposite is true: there’s only one polynomial on the right side, evaluated at three different points: x+1, x, and x-1.
## Generalized binomial theorem
Here’s the sense in which the Mittag-Leffler polynomials generalize the power function. If we let pn(x) = xⁿ be the power function, then the binomial theorem says

pn(x + y) = Σ_{k=0}^n C(n, k) pk(x) pn-k(y).

Something like the binomial theorem holds if we replace pn with Mn:

Mn(x + y) = Σ_{k=0}^n C(n, k) Mk(x) Mn-k(y).
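Both the recurrence and the binomial-type identity are easy to check symbolically. Here is a short SymPy sketch of my own, not from the original post:

import sympy as sp

x, y = sp.symbols('x y')

def M(n):
    # M_n(x) = x*(M_{n-1}(x+1) + 2*M_{n-1}(x) + M_{n-1}(x-1))/2
    if n == 0:
        return sp.Integer(1)
    p = M(n - 1)
    return sp.expand(x*(p.subs(x, x + 1) + 2*p + p.subs(x, x - 1))/2)

print([M(n) for n in range(4)])  # [1, 2*x, 4*x**2, 8*x**3 + 4*x]

n = 3
lhs = M(n).subs(x, x + y)
rhs = sum(sp.binomial(n, k)*M(k)*M(n - k).subs(x, y) for k in range(n + 1))
print(sp.expand(lhs - rhs))  # 0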
# Stable and unstable recurrence relations
The previous post looked at computing recurrence relations. That post ends with a warning that recursive evaluations may or may not be numerically stable. This post will give examples that illustrate stability and instability.
There are two kinds of Bessel functions, denoted J and Y. These are called Bessel functions of the first and second kinds respectively. These functions carry a subscript n denoting their order. Both kinds of Bessel functions satisfy the same recurrence relation:
fn+1 – (2n/x) fn + fn-1 = 0
where f is J or Y.
If you apply the recurrence relation in the increasing direction, it is unstable for J but stable for Y.
If you apply the recurrence relation in the opposite direction, it is stable for J and unstable for Y.
We will illustrate the above claims using the following Python code. Since both kinds of Bessel function satisfy the same recurrence, we pass the Bessel function in as a function argument. SciPy implements Bessel functions of the first kind as jv and Bessel functions of the second kind as yv. [1]
from numpy import pi, zeros
from scipy.special import jv, yv
def report(k, computed, exact):
    print(k, computed, exact, abs(computed - exact)/exact)

def bessel_up(x, n, f):
    # apply the recurrence in the direction of increasing order
    a, b = f(0, x), f(1, x)
    for k in range(2, n+1):
        a, b = b, 2*(k-1)*b/x - a
        report(k, b, f(k, x))

def bessel_down(x, n, f):
    # apply the recurrence in the direction of decreasing order
    a, b = f(n, x), f(n-1, x)
    for k in range(n-2, -1, -1):
        a, b = b, 2*(k+1)*b/x - a
        report(k, b, f(k, x))
We try this out as follows:
bessel_up(1, 20, jv)
bessel_down(1, 20, jv)
bessel_up(1, 20, yv)
bessel_down(1, 20, yv)
When we compute Jn(1) using bessel_up, the relative error starts out small and grows to about 1% when n = 9. The relative error increases rapidly from there. When n = 10, the relative error is 356%.
For n = 20, the recurrence gives a value of 316894.36 while the true value is 3.87e-25, i.e. the computed value is 30 orders of magnitude larger than the correct value!
When we use bessel_down, the results are correct to full precision.
Next we compute Yn(1) using bessel_up, and the results are correct to full precision.

When we compute Yn(1) using bessel_down, the results are about as bad as computing Jn(1) using bessel_up. We compute Y0(1) as 5.7e+27 while the correct value is roughly 0.088.
There are functions, such as Legendre polynomials, whose recurrence relations are stable in either direction, at least for some range of inputs. But it would be naive to assume that a recurrence is stable without some exploration.
## Miller’s algorithm
There is a trick for using the downward recurrence for Bessel functions known as Miller’s algorithm. It sounds crazy at first: Assume JN(x) = 1 and JN+1(x) = 0 for some large N, and run the recurrence downward.
Since we don’t know JN(x), our results will be off by some constant proportion. But there’s a way to find out what that proportionality constant is, using the relation described here:

1 = J0(x) + 2 J2(x) + 2 J4(x) + 2 J6(x) + …

We add up our computed values for the terms on the right side, then divide by the sum to normalize our estimates. Miller’s recurrence algorithm applies more generally to other recurrences where the downward recurrence is stable and there exists a normalization identity analogous to the one for Bessel functions.
The following code lets us experiment with Miller’s algorithm.
def miller(x, N):
    j = zeros(N)  # array to store values
    a, b = 0, 1   # assume J_{N+1}(x) = 0 and J_N(x) = 1
    for k in range(N-1, -1, -1):
        a, b = b, 2*(k+1)*b/x - a
        j[k] = b
    # normalize using 1 = J_0(x) + 2 J_2(x) + 2 J_4(x) + ...
    norm = j[0] + sum(2*j[k] for k in range(2, N, 2))
    j /= norm
    for k in range(N-1, -1, -1):
        report(k, j[k], jv(k, x))
When we call miller(pi, 20) we see that Miller’s method computes Jn(π) accurately. The error starts out moderately small and decreases until the results are accurate to floating point precision.
| k  | rel. error |
|----|------------|
| 19 | 3.91e-07   |
| 17 | 2.35e-09   |
| 16 | 2.17e-11   |
| 15 | 2.23e-13   |
| 14 | 3.51e-15   |
For smaller k the relative error is also around 10⁻¹⁵, i.e. essentially full precision.
[1] Why do the SciPy names end in “v”? The order of a Bessel function does not have to be an integer. It could be any real number, and the customary mathematical notation is to use a Greek letter ν (nu) as a subscript rather than n as a reminder that the subscript might not represent an integer. Since a Greek ν looks similar to an English v, SciPy uses v as a sort of approximation of ν.
# Analogies between Weierstrass functions and trig functions
If you look at the Wikipedia article on Weierstrass functions, you’ll find a line that says “the relation between the sigma, zeta, and ℘ functions is analogous to that between the sine, cotangent, and squared cosecant functions.” This post unpacks that sentence.
## Weierstrass p function
First of all, what is ℘? It’s the Weierstrass elliptic function, which is the mother of all elliptic functions in some sense. All other elliptic functions can be constructed from this function and its derivatives. As for the symbol itself, ℘ is the traditional symbol for the Weierstrass function. It’s U+2118 in Unicode, &weierp; in HTML, and \wp in LaTeX.
The line above suggests that ℘(x) is analogous to csc²(x). Indeed, the plots of the two functions are nearly identical.
Here’s Weierstrass’ function:
And here’s csc²:
The two plots basically agree to within the thickness of a line.
The Weierstrass function ℘ has two parameters that I haven’t mentioned. Elliptic functions are periodic in two directions in the complex plane, and so their values everywhere are determined by their values over a parallelogram. The two parameters specify the fundamental parallelogram, or at least that’s one way of parametrizing the function. The WeierstrassP function in Mathematica takes two other parameters, called the invariants, and these invariants were chosen in the plot above to match the period of the cosecant function.
Plot[Sin[x]^-2, {x, -10, 10}]
Plot[WeierstrassP[
x,
WeierstrassInvariants[{Pi/2, Pi I/2}]
],
{x, -10, 10}
]
The fundamental parallelogram is defined in terms of half periods, usually denoted with ω’s, and the invariants are denoted with g‘s. The function WeierstrassInvariants converts from half periods to invariants, from ω’s to g‘s.
Note that ℘ and cosecant squared are similar along the real axis, but they’re very different in the full complex plane. Trig functions are periodic along the real axis but grow exponentially along the imaginary axis. Elliptic functions are periodic along both axes.
## Weierstrass zeta function
The Weierstrass zeta function is not an elliptic function. It is not periodic but rather quasiperiodic. The derivative of the Weierstrass zeta function is the negative of the Weierstrass elliptic function, i.e.
ζ ‘(x) = -℘(x)
which is analogous to the fact that the derivative of cot(x) is -csc²(x). So in that sense ζ is to ℘ as cotangent is to cosecant squared.
The plots of ζ(x) and cot(x) are similar as shown below.
The Mathematica code to make the plot above was
Plot[
{WeierstrassZeta[x, WeierstrassInvariants[{Pi/2, Pi I/2}]],
Cot[x]},
{x, -10, 10},
PlotLabels -> {"zeta", "cot"}
]
## Weierstrass sigma function
The Weierstrass sigma function is also not an elliptic function. It is analogous to the sine function as follows. The logarithmic derivative of the Weierstrass sigma function is the Weierstrass zeta function, just as the logarithmic derivative of sine is cotangent. That is,
(log σ(x))′ = ζ(x).
The logarithmic derivative of a function is the derivative of its log, and so the logarithmic derivative of a function f is f ‘ / f.
However, the plot of the sigma function, WeierstrassSigma in Mathematica, hardly looks like sine.
So in summary, logarithmic derivative takes Weierstrass sigma to Weierstrass zeta just as it takes sine to cotangent. Negative derivative takes Weierstrass zeta to Weierstrass ℘ just as it takes cotangent to cosecant squared.
# Area of sinc and jinc function lobes
Someone left a comment this morning on my blog post on sinc and jinc integrals regarding the area of the lobes.
It would be nice to have the values of integrals of each lobe, i.e. integrals between 0 and multiples of pi. Anyone knows of such a table?
This post will include Python code to address that question. (Update: added asymptotic approximation. See below.)
First, let me back up and explain the context. The sinc function is defined as [1]
sinc(x) = sin(x) / x
and the jinc function is defined analogously as
jinc(x) = J1(x) / x,
substituting the Bessel function J1 for the sine function. You could think of Bessel functions as analogs of sines and cosines. Bessel functions often come up when vibrations are described in polar coordinates, just as sines and cosines come up when using rectangular coordinates.
Here’s a plot of the sinc and jinc functions:
The lobes are the regions between crossings of the x-axis. For the sinc function, the lobe in the middle runs from -π to π, and for n > 0 the nth lobe runs from nπ to (n+1)π. The zeros of Bessel functions are not uniformly spaced like the zeros of the sine function, but they come up in application frequently and so it’s easy to find software to compute their locations.
First of all we’ll need some imports.
from numpy import sin, pi
from scipy.special import jn, jn_zeros
from scipy.integrate import quad
The sinc and jinc functions are continuous at zero, but the computer doesn’t know that [2]. To prevent division by zero, we return the limiting value of each function for very small arguments.
def sinc(x):
    return 1 if abs(x) < 1e-8 else sin(x)/x

def jinc(x):
    return 0.5 if abs(x) < 1e-8 else jn(1, x)/x
You can show via Taylor series that these functions are exact to the limits of floating point precision for |x| < 10⁻⁸.
Here’s code to compute the area of the sinc lobes.
def sinc_lobe_area(n):
    n = abs(n)
    integral, info = quad(sinc, n*pi, (n+1)*pi)
    # the center lobe runs from -pi to pi, so double the half-lobe
    return 2*integral if n == 0 else integral
The corresponding code for the jinc function is a little more complicated because we need to compute the zeros for the Bessel function J1. Our solution is a little clunky because we have an upper bound N on the lobe number. Ideally we’d work out an asymptotic value for the lobe area and compute zeros up to the point where the asymptotic approximation became sufficiently accurate, and switch over to the asymptotic formula for sufficiently large n.
N = 100  # upper bound on the lobe number; any sufficiently large value works
jzeros = jn_zeros(1, N)  # first N positive zeros of J1

def jinc_lobe_area(n):
    n = abs(n)
    assert n < N
    if n == 0:
        # the center lobe runs from -jzeros[0] to jzeros[0]
        integral, info = quad(jinc, 0, jzeros[0])
        return 2*integral
    integral, info = quad(jinc, jzeros[n-1], jzeros[n])
    return integral
Note that the 0th element of the array returned by jn_zeros is the first positive zero of J1; it doesn’t include the zero at the origin.
For both sinc and jinc, the even numbered lobes have positive area and the odd numbered lobes have negative area. Here’s a plot of the absolute values of the lobe areas.
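As a quick check, a short driver along these lines (a sketch of mine, not from the original code) prints the first few lobe areas:

for n in range(5):
    print(n, sinc_lobe_area(n), jinc_lobe_area(n))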
## Asymptotic results
We can approximate the area of the nth lobe of the sinc function by using a midpoint approximation for 1/x. It works out that the area is asymptotically equal to

(-1)^n 2 / ((n + 1/2) π)
We can do a similar calculation for the area of the nth jinc lobe, starting with the asymptotic approximation for jinc given here. We find that the area of the nth lobe of the jinc function is asymptotically equal to

(-1)^n 2√2 / (π² (n + 3/4)^(3/2))
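In code, the two approximations above look like this (my sketch, matching the functions defined earlier):

def sinc_lobe_asymp(n):
    return (-1)**n * 2/((n + 0.5)*pi)

def jinc_lobe_asymp(n):
    return (-1)**n * 2*2**0.5/(pi**2 * (n + 0.75)**1.5)

print(sinc_lobe_area(100), sinc_lobe_asymp(100))
print(jinc_lobe_area(100), jinc_lobe_asymp(100))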
To get an idea of the accuracy of the asymptotic approximations, here are the results for n=100.
sinc area: 0.00633455
asymptotic: 0.00633452
absolute error: 2.97e-8
relative error: 4.69e-6
jinc area: 0.000283391
asymptotic: 0.000283385
absolute error: 5.66e-9
relative error: 2.00e-5
[1] Some authors define sinc(x) as sin(πx)/πx. Both definitions are common.
[2] Scipy has a sinc function in scipy.special, defined as sin(πx)/πx, but it doesn’t have a jinc function.