Dataset Viewer
Auto-converted to Parquet

Columns: url (string, lengths 17 to 1.81k) · text (string, lengths 100 to 950k) · date (string, length 19) · metadata (string, lengths 1.07k to 1.1k)
http://mathhelpforum.com/differential-geometry/144601-continuity-print.html
# Continuity • May 13th 2010, 03:07 PM janae77 Continuity Let f be defined and continuous on a closed set S in R. Let A = {x: x $\in$ S and f(x) = 0}. Prove that A is a closed subset of R. • May 13th 2010, 03:38 PM Plato Quote: Originally Posted by janae77 Let f be defined and continuous on a closed set S in R. Let A = {x: x $\in$ S and f(x) = 0}. Prove that A is a closed subset of R. Hint: If $f$ is continuous and $f(p)\not=0$ then there is an open interval such that $p\in (s,t)$ and $f$ is non-zero on $(s,t)$. Hence, does this show that the complement is open?
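For readers who want the hint unpacked, one standard completion runs as follows (an editorial sketch, not part of the original thread): write $A = S \cap f^{-1}(\{0\})$. Since $f$ is continuous on $S$ and $\{0\}$ is closed, $f^{-1}(\{0\})$ is closed relative to $S$, i.e. $f^{-1}(\{0\}) = S \cap C$ for some closed $C \subseteq \mathbb{R}$; hence $A = S \cap C$ is an intersection of two closed subsets of $\mathbb{R}$ and is therefore closed. Equivalently, Plato's interval argument shows that the set where $f \neq 0$ is relatively open in $S$, so its complement in $S$ is relatively closed, and a relatively closed subset of a closed set is closed in $\mathbb{R}$.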
2018-02-21 19:59:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218780994415283, "perplexity": 730.1085355867667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813712.73/warc/CC-MAIN-20180221182824-20180221202824-00290.warc.gz"}
http://tex.stackexchange.com/questions/13078/defining-optional-data-in-a-class
# Defining (optional) data in a class I'm trying to learn how to write LaTeX classes and am using my resume as a toy example. I am trying to separate style from content as much as possible, so I am trying to define data fields such as \name \address \university, similar to those that I have seen in \maketitle. Some of the fields of each of these are to be optional. I have a working example, but since this is my first attempt at writing a LaTeX class, I wanted to ask how I should be defining the class's data. My attempt so far is like so (optional second line for the address): \RequirePackage{xkeyval} % for keyval } } } } } \newcommand{\makeCV}{% \fi } This works okay for me; I can type \address { %second_line=my county, town=my city, postcode=my post code } \makeCV but the class seems a little verbose. In particular, having to define all those \newcommand{...}{} and all the various \newif's seems a little verbose. My question is: how should I be doing this task properly? - In what sense are these metadata? –  Seamus Mar 9 '11 at 18:35 @Seamus That's what I thought data in a class was called... (have taken the meta out of the title) –  Tom Mar 9 '11 at 18:35 As it stands, I'm not sure it's clear what you're asking. "How should I do this properly?" is a little unclear. Could you try and sharpen up what you want from answers here? –  Seamus Mar 9 '11 at 19:01 @Seamus You are right. I want to know what is usually done when someone defines \name \address etc. in a package. I couldn't find a guide that told me, so I had a go, but I feel that I have almost certainly done it badly. –  Tom Mar 9 '11 at 19:11

You can cut out a lot of the verbosity of the coding part using the keycommand package. But I wouldn't worry too much about the verbosity of the coding, but rather about the author interface you are presenting to your potential users, which is verbose. From my experience, users prefer environments and simple commands. I would reserve the key-val pairs mostly for switches, such as including a photo or not. Certainly the address lines do not belong in the key-val portion of the command. An interface as shown below, \usepackage[foto=none]{sCV} \begin{CV} \end{CV} would be easier to code and use. Using an environment would also make it easier to code something that is going to span potentially many pages. - "program to an interface, not an implementation." –  Matthew Leingang Mar 9 '11 at 21:01

I write this directly without a try, so be careful: NPK stands for "new package"; it is better to use, for example, letters from the name of your package. mcv stands for makeCV. With \define@boolkey you don't need \newif: \ifNPK@mcv@LineTwo is automatically created. \presetkeys gives default values, and \setkeys[NPK]{mcv}{#1} applies the options inside your macro. Written without a try, so perhaps I've made some typos :( Now I prefer to use pgfkeys; if you want the same things it's possible, but perhaps it's more verbose.

```latex
\define@boolkey [NPK] {mcv} {LineTwo}[true]{}
\define@cmdkey  [NPK] {mcv} {firstline}{}
\define@cmdkey  [NPK] {mcv} {secondline}{}
\define@cmdkey  [NPK] {mcv} {town}{}
\define@cmdkey  [NPK] {mcv} {postcode}{}
\presetkeys     [NPK] {mcv} {LineTwo = false,
                             firstline = {},
                             secondline = {},
                             town = {},% Paris
                             postcode = {}}{}% 75005
\newcommand{\makeCV}[1][]{%
  \setkeys[NPK]{mcv}{#1}
  \cmdNPK@mcv@firstline\\%
  \ifNPK@mcv@LineTwo
    \cmdNPK@mcv@secondline\\%
  \fi
  \cmdNPK@mcv@town\\%
  \cmdNPK@mcv@postcode%
}
```

- I fixed a ' that should have been a `. Hope you don't mind. –  Seamus Mar 9 '11 at 19:05
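To see the answer's pattern end to end, here is a minimal compilable sketch (an editorial example, not from the thread; it keeps the NPK/mcv names from the answer but omits the LineTwo switch for brevity, and the document wrapper is an assumption):

```latex
\documentclass{article}
\usepackage{xkeyval}
\makeatletter
% One \define@cmdkey per data field; xkeyval creates \cmdNPK@mcv@<key>.
\define@cmdkey[NPK]{mcv}{town}{}
\define@cmdkey[NPK]{mcv}{postcode}{}
\presetkeys   [NPK]{mcv}{town={}, postcode={}}{}% defaults
\newcommand{\makeCV}[1][]{%
  \setkeys[NPK]{mcv}{#1}% apply the user's key-val options
  \cmdNPK@mcv@town\\%
  \cmdNPK@mcv@postcode}
\makeatother
\begin{document}
\makeCV[town=my city, postcode=my post code]
\end{document}
```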
2015-05-27 05:57:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019410967826843, "perplexity": 1966.9893244144564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928907.65/warc/CC-MAIN-20150521113208-00046-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.tutorvista.com/content/math/probability-terms/
# Probability Terms Probability deals with uncertainty; in mathematics, this measure of uncertainty is called probability. In our daily life, while speaking, we use words like likely, possibly, probably. For example: • Probably, I may go to a movie. • He is likely to get the prize. • Possibly, Kam may leave today. What do the words likely, possibly, and probably convey? They convey that the event under consideration may or may not happen. It is a case of uncertainty. Introduction to probability and probability terms: The dictionary meaning of the word probable is "likely but not certain". So we could describe probability as an index which numerically measures the degree of certainty or uncertainty in the occurrence of events. To learn the definition of probability, it is essential to know the terms involved in probability. Terms in Probability: Experiment: An activity which results in a well-defined outcome is called an experiment. Random experiment: An experiment in which all possible outcomes are known in advance, but the exact result of any trial cannot be surely predicted, is called a random experiment. Tossing a coin, throwing a die, and picking a ball from a bag of balls are examples of random experiments. Trial: Performing an experiment once is called a trial. Events: The possible outcomes of a trial are called events. Equally likely events: If the different outcomes of a trial have an equal chance of occurring, then the outcomes are said to be equally likely. For example: when we throw a die once, the chances of 1, 2, 3, 4, 5, and 6 occurring are the same, so they are equally likely to appear. Sample space: The set of all possible outcomes of an experiment constitutes its sample space. Dependent events: The occurrence of one event has an effect on the probability of a second event. Independent events: The occurrence of one event has no effect on the probability of a second event. Outcome: Each result of a trial is called an outcome. Types of probability: There are two types of probability: theoretical probability and experimental (empirical) probability. Theoretical probability: The mathematical chance of occurrence of an event, given by Probability = $\frac{Number\ of\ outcomes\ favourable\ to\ an\ event}{Total\ number\ of\ possible\ outcomes}$ Experimental probability: When the number of cases favourable to an event is found experimentally and the probability is then calculated, that is called the experimental probability of an event.
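Where a numerical feel helps, here is a small simulation contrasting the two types of probability (an editorial sketch in Python, not from the original page; the roll count is arbitrary):

```python
# Compare theoretical and experimental probability for rolling a six.
import random

rolls = 10_000
favourable = sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)

theoretical = 1 / 6                # favourable outcomes / possible outcomes
experimental = favourable / rolls  # observed relative frequency

print(f"theoretical  = {theoretical:.4f}")
print(f"experimental = {experimental:.4f}")  # approaches 1/6 as rolls grows
```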
2019-09-23 05:57:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.621959924697876, "perplexity": 783.3901201465807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00499.warc.gz"}
https://www.semanticscholar.org/paper/Tableaux-on-k%2B1-cores%2C-reduced-words-for-affine-and-Lapointe-Morse/5e1d97ff57bd0b6cc1b83a88da608357d251dcd3
Tableaux on k+1-cores, reduced words for affine permutations, and k-Schur expansions @article{Lapointe2005TableauxOK, title={Tableaux on k+1-cores, reduced words for affine permutations, and k-Schur expansions}, author={Luc Lapointe and Jennifer Morse}, journal={J. Comb. Theory, Ser. A}, year={2005}, volume={112}, pages={44-81} } • Published 19 February 2004 • Mathematics • J. Comb. Theory, Ser. A Figures from this paper Combinatorics of (l,0)-JM partitions, l-cores, the ladder crystal and the finite Hecke algebra The following thesis contains results on the combinatorial representation theory of the finite Hecke algebra $H_n(q)$. In Chapter 2 simple combinatorial descriptions are given which determine when QUANTUM COHOMOLOGY AND THE k-SCHUR BASIS • Mathematics • 2007 We prove that structure constants related to Hecke algebras at roots of unity are special cases of k-Littlewood-Richardson coefficients associated to a product of k-Schur functions. As a consequence, Order Ideals in Weak Subposets of Young’s Lattice and Associated Unimodality Conjectures • Mathematics • 2004 Abstract: The k-Young lattice Yk is a weak subposet of the Young lattice containing partitions whose first part is bounded by an integer k > 0. The Yk poset was introduced in connection with Operators on k-tableaux and the k-Littlewood-Richardson rule for a special case This thesis proves a special case of the $k$-Littlewood--Richardson rule, which is analogous to the classical Littlewood--Richardson rule but is used in the case for $k$-Schur functions. The K-theory Schubert calculus of the affine Grassmannian • Mathematics Compositio Mathematica • 2010 Abstract: We construct the Schubert basis of the torus-equivariant K-homology of the affine Grassmannian of a simple algebraic group G, using the K-theoretic NilHecke ring of Kostant and Kumar. This A Note on Embedding Hypertrees Bohman, Frieze, and Mubayi's problem is solved, proving the tight result that $\chi > t$ is sufficient to embed any $r$-tree with t edges. Quantum cohomology of G/P and homology of affine Grassmannian • Mathematics • 2007 Let G be a simple and simply-connected complex algebraic group, P ⊂ G a parabolic subgroup. We prove an unpublished result of D. Peterson which states that the quantum cohomology QH*(G/P) of a flag Affine Insertion and Pieri Rules for the Affine Grassmannian • Mathematics • 2006 We study combinatorial aspects of the Schubert calculus of the affine Grassmannian Gr associated with SL(n,C). Our main results are: 1) Pieri rules for the Schubert bases of H^*(Gr) and H_*(Gr), College of Arts and Sciences Quantum Cohomology and the K-schur Basis • Mathematics • 2005 The following item is made available as a courtesy to scholars by the author(s) and Drexel University Library and may contain materials and content, including computer code and tags, artwork, text, References SHOWING 1-10 OF 23 REFERENCES Ordering the Affine Symmetric Group We review several descriptions of the affine symmetric group. We make explicit the basis of its Bruhat order. Tableau atoms and a new Macdonald positivity conjecture Duke Math J • Engineering • 2000 A snap action fluid control valve, the operation of which is controlled by a relatively slow acting thermally responsive actuator member. 
The valve of this invention is particularly adapted for use Crystal base for the basic representation of $$U_q (\widehat{\mathfrak{s}\mathfrak{l}}(n))$$ • Mathematics • 1990 Abstract: We show the existence of the crystal base for the basic representation of $$U_q (\widehat{\mathfrak{s}\mathfrak{l}}(n))$$ by giving an explicit description in terms of Young diagrams. Algebraic Combinatorics And Quantum Groups * Uno's Conjecture on Representation Types of Hecke Algebras (S Ariki) * Quiver Varieties, Affine Lie Algebras, Algebras of BPS States, and Semicanonical Basis (I Frenkel et al.) * Divided Differences Crystal base for the basic representation of • Mathematics • 1990 We show the existence of the crystal base for the basic representation of $U_q(\widehat{\mathfrak{s}\mathfrak{l}}(n))$ by giving an explicit description in terms of Young diagrams. Young Tableaux: With Applications to Representation Theory and Geometry Part I. Calculus of Tableaux: 1. Bumping and sliding 2. Words: the plactic monoid 3. Increasing sequences: proofs of the claims 4. The Robinson-Schensted-Knuth Correspondence 5. The Upper Bounds in Affine Weyl Groups under the Weak Order It is determined that the question of which pairs of elements of W have upper bounds can be reduced to the analogous question within a particular finite subposet within an affine Weyl group W0.
2022-09-27 14:45:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6162826418876648, "perplexity": 2302.0686311194013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00176.warc.gz"}
http://mathhelpforum.com/calculus/83762-exponential-fourier-series-expansion-print.html
# Exponential Fourier-series expansion • April 14th 2009, 04:33 PM tiki_master Exponential Fourier-series expansion I need help in determining the exponential Fourier-series expansion for the half-wave rectified signal $x(t)=\cos(t)$. I am trying to find $X_n$, and have determined that for the case $n=0$, $X_0=1/\pi$... but I'm having trouble finding the general case for $X_n$. Any help would be appreciated. • May 23rd 2009, 06:48 PM Media_Man Fourier Series $f(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty a_n\cos(nx)+\sum_{n=1}^\infty b_n\sin(nx)$ $a_0=\frac{1}{\pi}\int_{-\pi}^\pi f(x)dx$ $a_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos(nx)dx$ $b_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin(nx)dx$ Look very, very carefully at the function you are trying to expand here. $x(t)=\cos(t)$, therefore $a_0=0$, $a_1=1$, $a_n=0$ for all $n>1$, and $b_n=0$ for all $n$.
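Editorial note on the question actually asked: the reply above expands the unrectified $\cos(t)$. For the half-wave rectified cosine (assuming the standard convention of period $2\pi$ with $x(t)=\cos t$ for $|t|<\pi/2$ and $x(t)=0$ elsewhere), the exponential coefficients work out to $X_n = \frac{1}{2\pi}\int_{-\pi/2}^{\pi/2}\cos t\, e^{-jnt}\,dt = \frac{\cos(n\pi/2)}{\pi(1-n^2)}$ for $n \neq \pm 1$, with $X_{\pm 1} = \frac{1}{4}$; this gives $X_0 = 1/\pi$, matching the value found in the question.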
2016-08-30 19:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8168185353279114, "perplexity": 874.7091366192794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983001995.81/warc/CC-MAIN-20160823201001-00202-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.experts-exchange.com/questions/24547717/Change-Folder-Permissions-in-Bulk.html
Solved # Change Folder Permissions in Bulk Posted on 2009-07-06 537 Views Hello. Currently I am in a W2K3 environment. On one of our shared drives we have a list of Jobs, each of which has a folder called COST. I need to reset permissions on that COST folder, but can't push it down via inheritance. Is there any way, by a GUI tool or script, I can make those changes? An example of the structure is: \\root\JobFiles\JobNumber1\Cost Therefore I would like to script something that resets the permissions for these folders: \\root\JobFiles\JobNumber1\Cost \\root\JobFiles\JobNumber2\Cost \\root\JobFiles\JobNumber3\Cost and so forth. 0 Question by:AaronIT LVL 38 Expert Comment ID: 24788900 You could use the cacls.exe command line tool and then script the changes that you need to make. You can get the applicable switches for the command by running cacls /? at the command line, if you're not familiar with it. You'd still have to identify each folder individually in the script, but at least you could cut and paste the folder paths. There's also an enhanced tool for Win2K3 SP2, which I've never used: http://support.microsoft.com/kb/919240 0 LVL 2 Author Comment ID: 24788947 So keeping with my example above... What would my command be cacls \\root\JobFiles\JobNumber1\Cost /T How do I set it to inherit? Can I also add a group instead of a user? 0 LVL 85 Accepted Solution oBdA earned 250 total points ID: 24789010 Well, this would be a lot easier to answer if you'd say which permissions those Cost folders should have ... The example below would add *C*hange permissions to all Cost subfolders, leaving the current permissions intact. Simply enter the following command in a command line. You can do that safely; it's in test mode and will only echo the cacls commands it would otherwise run: for /d %a in ("\\root\JobFiles\*.*") do @ECHO cacls.exe "%a\Cost" /t /e /g:YOURDOMAIN\SomeGroup:C To run it for real, you'd need to leave out the @ECHO. 0 LVL 1 Assisted Solution vixtro earned 250 total points ID: 24839782 Agreeing with oBdA - it'd be a lot easier if you could say what you want the permissions on the folders to look like after the script has run. I use xcacls.vbs from VBScript on my fileserver - I use it to go through and change permissions of subfolders en masse. You can download XCACLS.VBS from here: You'll have to play around with it a little as I'm not entirely sure exactly what end result you're after. To see the switches for xcacls.vbs, run this from the command prompt: cscript \\path\to\xcacls.vbs /? NB: Copy and paste this code into a blank notepad document, and save it with the extension ".vbs" for it to work.

```vb
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objShell = CreateObject("WScript.Shell")
rootPath = "\\root\jobfiles"          ' plain string, so no Set keyword
xcaclsPath = "path\to\xcacls.vbs"
Set rootFolder = objFSO.GetFolder(rootPath)
For Each subfolder In rootFolder.SubFolders
    fldrName = subfolder.Name
    ' Running the next two commands grants MODIFY access to the specified user
    modCmd = "cscript " & xcaclsPath & " " & rootPath & "\" & fldrName & "\Cost /E /G DOMAIN\username:M"
    objShell.Run modCmd, 1, True
    ' Running the next two commands turns the INHERIT flag for the folder on
    inCmd = "cscript " & xcaclsPath & " " & rootPath & "\" & fldrName & "\Cost /E /I ENABLE"
    objShell.Run inCmd, 1, True
Next
```

0
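On newer Windows systems the same bulk change can be scripted without xcacls.vbs. Here is a hedged Python sketch driving the built-in icacls tool (an editorial example, not from the thread; the share path and group are placeholders, and (OI)(CI)M grants inheritable Modify access):

```python
# Grant Modify on every JobFiles\<job>\Cost folder via icacls.
# Run on Windows with sufficient rights; the path and group are placeholders.
import subprocess
from pathlib import Path

root = Path(r"\\root\JobFiles")   # share root from the question
group = r"DOMAIN\SomeGroup"       # placeholder principal

for job in root.iterdir():
    cost = job / "Cost"
    if cost.is_dir():
        # (OI)(CI) make the grant inherit to files and subfolders; M = Modify
        subprocess.run(
            ["icacls", str(cost), "/grant", f"{group}:(OI)(CI)M"],
            check=True,
        )
```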
2017-07-27 01:10:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3839748501777649, "perplexity": 7680.54388571445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426693.21/warc/CC-MAIN-20170727002123-20170727022123-00178.warc.gz"}
https://blender.stackexchange.com/questions/238155/alt-scroll-wheel-in-time-line-and-jump-back-to-start
# Alt Scroll Wheel in time line and jump back to start Is there a way to customize Alt + Mouse Wheel, which works in most editors, so that when you reach the end frame you jump back to the start? Is it something I don't see in the settings, or do I need a script for that, and if so, how can I do it by script? I'm not looking for other hotkeys to jump to start and end; I want to use that specific hotkey and move in the timeline normally, but instead of passing the end frame, jump back to the start, and instead of being stuck at the start, get back to the end frame... Like in a walk cycle animation: this way you can easily see the transition between the end and starting over. • SHIFT-Left arrow takes you to the start and SHIFT-Right arrow to the end of the timeline. Sep 15 at 8:25 • @JohnEason yeah, but for that you need to take one of your hands off the mouse or keyboard. If there was a way to do it just like I said, that would be nice. Sep 15 at 9:29 • Can do most things in blender via a script. Please clarify what you do want, not what you don't. Are we talking about scrubbing the time line with the scroll wheel, and if it goes past the end points it reverts to the other? Sep 15 at 15:24 • @batFINGER the question is self-explanatory. That link you sent helped a lot; I just need to look up more stuff to make the script for it. Sep 15 at 15:28

Edit the keymap. As commented by @JohnEason, these are already mapped to SHIFT plus left arrow for go to start, and right for end. You can search for keymaps by keypress, or by name; if unsure, hover over the button that does it. Once found, change to suit. Here I've altered it to ALT + middle mouse click. Save User Preferences to make the change permanent. • I know I can change the hotkey for that, but what I'm saying is to change the behavior of a specific hotkey when something happens. It is a small change, but I guess that way it's easier for something like a walk cycle, so this won't be the answer. Sep 15 at 14:55 • Clarify any details by editing them into your question. Do you want to match the frame range to the current object's action range rather than the scene? There are answers relating to that question. Sep 15 at 14:58 • yeah maybe that would work Sep 15 at 15:05 • blender.stackexchange.com/questions/27889/… Sep 15 at 15:13

### Why Even Do This

There are several options to monitor the transitions between frames, but in my experience the best way is to use the mouse scroll wheel, since you have more control over speed and it's easy to use. It's like classic animation, when animators flip paper back and forth between their fingers. But when you are making a cycle animation, which is used a lot in game animation, there is one thing that is annoying: you can't see the transition from the end frame to starting over as easily. It is a small change, but I think it's worth it. So here is a way to make this work: basically, we disable the old behavior and make a new one that does the same thing but checks for the end and start frames, and then does the expected thing.

### Disable Default Hotkeys

Edit -> Preferences -> Keymap 1. Set the search type to Key-Binding 2. Search for "wheel" 3. Scroll a bit and find the Frames section 4. Disable both Frame Offset entries

### How to Make The Script

To make a function and assign a hotkey to it, we need an Operator, which has an execute function where our logic will live; when we assign a hotkey to the operator and the keys are pressed (the event happens), this function will be called. We want to make this an add-on, so we add some info about it in bl_info.

### First Step

As you can see, the operator class has some basic properties that we can fill in, like an ID, a label, and more.

```python
bl_info = {
    "name": "Better_Scroll_Time",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy
from bpy.props import *

class ScrollWheelTime(bpy.types.Operator):
    """Sets the time in range"""            # Tooltip for menu items and buttons.
    bl_idname = "object.scroll"             # Unique identifier for buttons and menu items to reference.
    bl_label = "Better Time Scroll Wheel"   # Display name in the interface.
    bl_options = {'REGISTER', 'UNDO'}       # Enable undo for the operator.

    def execute(self, context):  # execute() is called when running the operator.
        # logic
        return {'FINISHED'}      # Lets Blender know the operator finished successfully.
```

### Execute

```python
    direction: IntProperty()  # outside of the execute function, as a class member

    def execute(self, context):
        scn = context.scene
        current_frame = scn.frame_current + self.direction
        scn.frame_set(current_frame)
        if not scn.frame_start <= current_frame <= scn.frame_end:
            scn.frame_set(scn.frame_start if self.direction >= 1 else scn.frame_end)
        return {'FINISHED'}
```

• If you look at Blender keymaps in Preferences, you can find operators with some properties; we can provide those too, with bpy.props functions like IntProperty() • We need a reference to the scene we are in, to access things like the start and end frames of the scene • We increase or decrease the current frame by the amount of direction, which will be set to +1 or -1 based on the shortcut • If the current frame is not between the start and end frames of the scene, we decide where to put the frame cursor based on the direction • We tell Blender this operation is finished

### Register

```python
addon_keymaps = []  # outside of the function

def register():
    bpy.utils.register_class(ScrollWheelTime)
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon  # may be None when Blender runs in background mode
    if kc:
        km = kc.keymaps.new(name="Window", space_type='EMPTY', region_type='WINDOW')
        km_up = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELUPMOUSE', value='PRESS', alt=True)
        km_down = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELDOWNMOUSE', value='PRESS', alt=True)
        km_up.properties.direction = -1    # default property value for wheel up
        km_down.properties.direction = +1  # default property value for wheel down
        addon_keymaps.append((km, km_up))
        addon_keymaps.append((km, km_down))
```

• Here we register our operator class so we can use it as an add-on • Then we make a keymap using wm.keyconfigs.addon.keymaps.new and set its parameters to name="Window", space_type='EMPTY', region_type='WINDOW' so our hotkey works in all editor windows • Then we assign our shortcuts to the operator class • We can set a default value for the custom properties defined as class members using bpy.props • We save these hotkeys so we can remove them in the unregister function

To keep things clean, in the unregister function we remove our hotkeys. You can see the completed script here (the code could use some clean-ups):

```python
bl_info = {
    "name": "Better_Scroll_Time",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy
from bpy.props import *


class ScrollWheelTime(bpy.types.Operator):
    """Sets the time in range"""            # Tooltip for menu items and buttons.
    bl_idname = "object.scroll"             # Unique identifier for buttons and menu items to reference.
    bl_label = "Better Time Scroll Wheel"   # Display name in the interface.
    bl_options = {'REGISTER', 'UNDO'}       # Enable undo for the operator.

    direction: IntProperty()

    def execute(self, context):  # execute() is called when running the operator.
        scn = context.scene
        current_frame = scn.frame_current + self.direction
        scn.frame_set(current_frame)
        # Wrap around when scrolling past either end of the frame range
        if not scn.frame_start <= current_frame <= scn.frame_end:
            scn.frame_set(scn.frame_start if self.direction >= 1 else scn.frame_end)
        return {'FINISHED'}      # Lets Blender know the operator finished successfully.


addon_keymaps = []


def register():
    bpy.utils.register_class(ScrollWheelTime)
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon
    if kc:
        km = kc.keymaps.new(name="Window", space_type='EMPTY', region_type='WINDOW')
        km_up = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELUPMOUSE', value='PRESS', alt=True)
        km_down = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELDOWNMOUSE', value='PRESS', alt=True)
        km_up.properties.direction = -1    # default property value for wheel up
        km_down.properties.direction = +1  # default property value for wheel down
        addon_keymaps.append((km, km_up))
        addon_keymaps.append((km, km_down))
    print("Installed Better Scroll Time !")


def unregister():
    bpy.utils.unregister_class(ScrollWheelTime)
    # Remove the hotkeys
    for km, kmi in addon_keymaps:
        km.keymap_items.remove(kmi)
    addon_keymaps.clear()
```

• context is already passed to the operator methods, so you can replace bpy.context.* with context.* for all calls/properties, and when declaring a variable, use it; see: pasteall.org/oUiT/raw Also consider that this is no regular forum and link-only answers are discouraged; if the link goes down, so does the answer, hence the downvote I guess. Sep 16 at 8:27
2021-12-02 07:01:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2360772341489792, "perplexity": 3468.472806140118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00226.warc.gz"}
http://mathematica.stackexchange.com/questions/8288/setting-a-variable-equal-to-the-output-of-findroot/8292
# setting a variable equal to the output of FindRoot So I set a function f[x] f[x_] := x*E^(-x) - 0.16064 Then I set a variable 'actualroot' to the output of FindRoot, starting at 3 actualroot = FindRoot[ f[x], {x, 3} ] and get the output {x -> 2.88976} Later I want to compare this output with a different estimate (-2.88673) of the root, and calculate the error, so I have Abs[ (actualroot - estimateroot)/actualroot ] and I get this output: Abs[ (-2.88673 + (x -> 2.88976))/(x -> 2.88976) ] How do I get Mathematica to evaluate this expression? I also tried using N[] to give me a decimal evaluation, but it didn't work. - You can use actualroot = FindRoot[f[x], {x, 3}][[1, 2]] –  b.gatessucks Jul 13 '12 at 20:27 The usual way to get the values of the results of FindRoot, Solve, etc., which are lists of Rule, is the following: f[x_] := x E^(-x) - 0.16064 actualroot = x /. FindRoot[f[x], {x, 3}] estimateroot = -2.88673; Abs[(actualroot - estimateroot)/actualroot] Output: 2.88976 1.99895 (The relative error comes out near 2 because the estimate has the opposite sign from the computed root.) - Thanks, new to Mathematica, just diving in –  DWC Jul 13 '12 at 21:05 @DWC well, then it's probably good to know that /. is shorthand for ReplaceAll. Apart from its doc page, reading this tutorial on transformation rules will prove fruitful. –  Sjoerd C. de Vries Jul 13 '12 at 21:51
2014-03-08 04:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3065386712551117, "perplexity": 4436.3842938887865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653077/warc/CC-MAIN-20140305060733-00037-ip-10-183-142-35.ec2.internal.warc.gz"}
https://elteoremadecuales.com/abels-theorem-2/?lang=pt
# Abel's theorem

This article is about Abel's theorem on power series. For Abel's theorem on algebraic curves, see Abel–Jacobi map. For Abel's theorem on the insolubility of the quintic equation, see Abel–Ruffini theorem. For Abel's theorem on linear differential equations, see Abel's identity. For Abel's theorem on irreducible polynomials, see Abel's irreducibility theorem. For Abel's formula for summation of a series, using an integral, see Abel's summation formula.

In mathematics, Abel's theorem for power series relates a limit of a power series to the sum of its coefficients. It is named after the Norwegian mathematician Niels Henrik Abel.

Contents 1 Theorem 2 Remarks 3 Applications 4 Outline of proof 5 Related concepts 6 See also 7 Further reading 8 External links

Theorem

Let $G(x)=\sum_{k=0}^{\infty}a_{k}x^{k}$ be a power series with real coefficients $a_{k}$ with radius of convergence $1$. Suppose that the series $\sum_{k=0}^{\infty}a_{k}$ converges. Then $G(x)$ is continuous from the left at $x=1$, that is,

$\lim_{x\to 1^{-}}G(x)=\sum_{k=0}^{\infty}a_{k}.$

The same theorem holds for complex power series $G(z)=\sum_{k=0}^{\infty}a_{k}z^{k}$, provided that $z\to 1$ entirely within a single Stolz sector, that is, a region of the open unit disk where $|1-z|\leq M(1-|z|)$ for some fixed finite $M>1$. Without this restriction, the limit may fail to exist: for example, the power series $\sum_{n>0}\frac{z^{3^{n}}-z^{2\cdot 3^{n}}}{n}$ converges to $0$ at $z=1$, but is unbounded near any point of the form $e^{\pi i/3^{n}}$, so the value at $z=1$ is not the limit as $z$ tends to 1 in the whole open disk.

Note that $G(z)$ is continuous on the real closed interval $[0,t]$ for $t<1$, by virtue of the uniform convergence of the series on compact subsets of the disk of convergence. Abel's theorem allows us to say more, namely that $G(z)$ is continuous on $[0,1]$.

Remarks

As an immediate consequence of this theorem, if $z$ is any nonzero complex number for which the series $\sum_{k=0}^{\infty}a_{k}z^{k}$ converges, then it follows that $\lim_{t\to 1^{-}}G(tz)=\sum_{k=0}^{\infty}a_{k}z^{k}$ in which the limit is taken from below.

The theorem can also be generalized to account for sums which diverge to infinity.[citation needed] If $\sum_{k=0}^{\infty}a_{k}=\infty$ then $\lim_{z\to 1^{-}}G(z)\to\infty.$ However, if the series is only known to be divergent, but for reasons other than diverging to infinity, then the claim of the theorem may fail: take, for example, the power series for $\frac{1}{1+z}$. At $z=1$ the series is equal to $1-1+1-1+\cdots$, but $\frac{1}{1+1}=\frac{1}{2}$.

We also remark that the theorem holds for radii of convergence other than $R=1$: let $G(x)=\sum_{k=0}^{\infty}a_{k}x^{k}$ be a power series with radius of convergence $R$, and suppose the series converges at $x=R$. Then $G(x)$ is continuous from the left at $x=R$, that is, $\lim_{x\to R^{-}}G(x)=G(R).$

Applications

The utility of Abel's theorem is that it allows us to find the limit of a power series as its argument (that is, $z$) approaches $1$ from below, even in cases where the radius of convergence, $R$, of the power series is equal to $1$ and we cannot be sure whether the limit should be finite or not. See, for example, the binomial series. Abel's theorem allows us to evaluate many series in closed form. For example, when $a_{k}=\frac{(-1)^{k}}{k+1}$, we obtain $G_{a}(z)=\frac{\ln(1+z)}{z}$ for $0<z<1$, by integrating the uniformly convergent geometric power series term by term on $[-z,0]$; thus the series $\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k+1}$ converges to $\ln(2)$ by Abel's theorem. Similarly, $\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}$ converges to $\arctan(1)=\frac{\pi}{4}$.

$G_{a}(z)$ is called the generating function of the sequence $a$. Abel's theorem is frequently useful in dealing with generating functions of real-valued and non-negative sequences, such as probability-generating functions. In particular, it is useful in the theory of Galton–Watson processes.

Outline of proof

After subtracting a constant from $a_{0}$, we may assume that $\sum_{k=0}^{\infty}a_{k}=0$. Let $s_{n}=\sum_{k=0}^{n}a_{k}$. Then substituting $a_{k}=s_{k}-s_{k-1}$ and performing a simple manipulation of the series (summation by parts) results in

$G_{a}(z)=(1-z)\sum_{k=0}^{\infty}s_{k}z^{k}.$

Given $\varepsilon>0$, pick $m$ large enough so that $|s_{k}|<\varepsilon$ for all $k\geq m$. Then, for $z$ in the Stolz sector,

$\left|(1-z)\sum_{k=m}^{\infty}s_{k}z^{k}\right|\leq\varepsilon|1-z|\sum_{k=m}^{\infty}|z|^{k}\leq\varepsilon\frac{|1-z|}{1-|z|}\leq\varepsilon M,$

while the remaining finite sum $(1-z)\sum_{k=0}^{m-1}s_{k}z^{k}$ tends to $0$ as $z\to 1$. Hence $|G_{a}(z)|\leq(M+1)\varepsilon$ once $z$ is in the sector and sufficiently close to $1$; since $\varepsilon>0$ was arbitrary, $\lim_{z\to 1}G_{a}(z)=0$.
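A quick numerical illustration of the $\ln 2$ example above (an editorial sketch in Python; the evaluation points and term cutoff are arbitrary choices):

```python
# Abel's theorem, numerically: G(x) = sum a_k x^k with a_k = (-1)^k/(k+1)
# approaches ln 2 as x -> 1 from below.
import math

def G(x, terms=100_000):
    return sum((-1) ** k / (k + 1) * x ** k for k in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, G(x))
print("ln 2 =", math.log(2))
```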
2023-04-02 12:53:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928529679775238, "perplexity": 8178.9014807282165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00006.warc.gz"}
https://www.clutchprep.com/organic-chemistry/practice-problems/15672/write-a-structural-formula-for-each-of-the-following-compounds-a-6-isopropyl-2-3
Problem: Write a structural formula for each of the following compounds: (a) 6-Isopropyl-2,3-dimethylnonane
2021-01-23 05:07:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958631694316864, "perplexity": 8777.451071631454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00622.warc.gz"}
http://mathcentral.uregina.ca/QQ/database/QQ.09.13/h/prince1.html
Math Central Quandaries & Queries Question from Prince, a student: What is the exponential form of 1/square root of 6v? Hi, There are two uses of exponents you need here. The first is fractional exponents. For example $x^{1/2} = \sqrt{x}$ and $x^{1/3} = \sqrt[3]{x}$ or in general if $p$ is a positive integer then $x^{1/p} = \sqrt[p]{x}.$ The other use of exponents is negative exponents, $x^{-y} = \frac{1}{x^y}.$ Can you complete your problem now? Penny Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
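Carrying Penny's two rules through to the expression asked about (an editorial completion of the step she leaves to the student): $\frac{1}{\sqrt{6v}} = \frac{1}{(6v)^{1/2}} = (6v)^{-1/2}.$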
2017-11-19 14:18:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4154507517814636, "perplexity": 568.2080352239942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805649.7/warc/CC-MAIN-20171119134146-20171119154146-00699.warc.gz"}
https://www.thejournal.club/c/paper/103883/
#### Computing Equilibria with Partial Commitment ##### Vincent Conitzer In security games, the solution concept commonly used is that of a Stackelberg equilibrium where the defender gets to commit to a mixed strategy. The motivation for this is that the attacker can repeatedly observe the defender's actions and learn her distribution over actions, before acting himself. If the actions were not observable, Nash (or perhaps correlated) equilibrium would arguably be a more natural solution concept. But what if some, but not all, aspects of the defender's actions are observable? In this paper, we introduce solution concepts corresponding to this case, both with and without correlation. We study their basic properties, whether these solutions can be efficiently computed, and the impact of additional observability on the utility obtained.
2021-12-02 10:30:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8268622159957886, "perplexity": 857.2927488424104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00361.warc.gz"}
https://socratic.org/questions/590be7d311ef6b2c97f9260a#419237
# When a bottle of wine is opened, its hydrogen ion concentration, [H_3O^+]=4.1xx10^-4*mol*L^-1; what is its pH? How does the pH evolve after the wine is left to stand? May 7, 2017 $pH = -\log_{10}\left[H_3O^+\right]$ #### Explanation: And thus, when freshly opened, $pH = -\log_{10}(4.1\times10^{-4}) = 3.39$. (And in fact most wines have a $pH$ around this level.) See here for more detail on the definition of $pH$. And later, $[H_3O^+] = 0.0023\cdot mol\cdot L^{-1}$, so $pH = 2.64$. The wine must not have been very good, because most wine is consumed within 12 hours after opening. What likely occurred is that the ethyl alcohol was air-oxidized to acetic acid, which is a carboxylic acid, and thus likely to give a lower $pH$ in aqueous solution. (Note that this oxidation is why we seal wine bottles with an air-tight cork/cap.) For the oxidation of ethyl alcohol to acetic acid we could write the equation: $H_3CCH_2OH(aq) + O_2(g) \rightarrow H_3CCO_2H(aq) + H_2O(l)$
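The two quoted values can be checked in a line or two (an editorial sketch in Python, not from the original answer):

```python
# pH = -log10([H3O+]) for the two concentrations quoted above.
import math

for c in (4.1e-4, 0.0023):  # mol/L: freshly opened, then after standing
    print(f"[H3O+] = {c:.2e} mol/L  ->  pH = {-math.log10(c):.2f}")
# prints pH = 3.39 and pH = 2.64, matching the answer
```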
2021-09-22 13:51:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072762489318848, "perplexity": 3471.6090485168083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00367.warc.gz"}
https://www.genetics.org/content/178/4/2169?ijkey=0c63ab5c8052670408ef7a9cf93547d14ab17bd0&keytype2=tf_ipsecsha
The Effects of Recombination Rate on the Distribution and Abundance of Transposable Elements | Genetics
2021-06-25 01:30:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2540581226348877, "perplexity": 2724.989208879195}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00173.warc.gz"}
https://www.ytiancompbio.com/publications/dengue-review/
# Human T cell response to dengue virus infection ## Authors Yuan Tian, Alba Grifoni, Alessandro Sette, Daniela Weiskopf ## Journal Trends in Immunology 37 (8), 557-568 ## Abstract DENV is a major public health problem worldwide, thus underlining the overall significance of the proposed Program. The four dengue virus (DENV) serotypes (1-4) cause the most common mosquito-borne viral disease of humans, with 3 billion people at risk for infection and up to 100 million cases each year, most often affecting children. The protective role of T cells during viral infection is well established. Generally, CD8 T cells can control viral infection through several mechanisms, including direct cytotoxicity and production of pro-inflammatory cytokines such as IFN-γ and TNF-α. Similarly, CD4 T cells are thought to control viral infection through multiple mechanisms, including enhancement of B and CD8 T cell responses, production of inflammatory and anti-viral cytokines, cytotoxicity, and promotion of memory responses. To probe the phenotype of virus-specific T cells, epitopes derived from viral sequences need to be known. Here we discuss the identification of CD4 and CD8 T cell epitopes derived from DENV and how these epitopes have been used by researchers to interrogate the phenotype and function of DENV-specific T cell populations.
2023-03-24 00:52:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.24376802146434784, "perplexity": 9677.930693957733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00709.warc.gz"}
http://mathoverflow.net/questions/121915/linear-numeration-systems
# Linear numeration systems Let $F_{i}$ be the Fibonacci or a multinacci sequence. The number of representations of $N$ in the form $N=\sum_{i=0}^{k}s_{i}F_{i},\ s_{i}\in\{0,1\}$ is known. My question is what is known about sequence-based numeration systems given by other linear recurrences. To make the question precise, I am interested in the recurrence $G_{i+4}=G_{i+3}+G_{i+2}+G_{i+1}-G_{i}$ with $G_{0}=1$, $G_{1}=2$, $G_{2}=4$, $G_{3}=8$. What is known about $\sharp_{G}N:=\sharp\{(s_{0},\dots,s_{k})\in\{0,1\}^{k+1}\mid N=\sum_{i=0}^{k}s_{i}G_{i}\}$? - ## 1 Answer Some results on the quantity in question can be found in J. M. Dumont, N. Sidorov and A. Thomas, Number of representations related to a linear recurrent basis, Acta Arithmetica 88 (1999), 371-394. We are mainly interested in the summatory function but there are also some upper bounds for the quantity itself. Our main assumption is that the corresponding root (of $x^4=x^3+x^2+x-1$ in your case) is a Perron number (in your example it's even a Salem number, so our results apply). - Many thanks for the reference. Best –  Jörg Neunhäuserer Feb 16 '13 at 15:39 No problem. Hope it'll help. –  Nikita Sidorov Feb 16 '13 at 22:41
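For small $N$ the quantity $\sharp_{G}N$ can be tabulated directly, which is handy for checking against the cited results (an editorial brute-force sketch in Python, not from the original thread):

```python
# Count 0/1-representations of N over the basis G_0, G_1, ... from the question:
# G_{i+4} = G_{i+3} + G_{i+2} + G_{i+1} - G_i, starting 1, 2, 4, 8.
def count_representations(N):
    G = [1, 2, 4, 8]
    while G[-1] <= N:
        G.append(G[-1] + G[-2] + G[-3] - G[-4])
    ways = {0: 1}  # ways[s] = number of subsets seen so far summing to s
    for g in G:
        new = dict(ways)
        for s, c in ways.items():
            if s + g <= N:
                new[s + g] = new.get(s + g, 0) + c
        ways = new
    return ways.get(N, 0)

print([count_representations(n) for n in range(1, 16)])
```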
2014-11-27 23:28:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126301765441895, "perplexity": 397.64678656531294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009292.37/warc/CC-MAIN-20141125155649-00038-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.mathway.com/glossary/definition/35/axis-of-symmetry
axis of symmetry A line that passes through a figure in such a way that the part of the figure on one side of the line is a mirror reflection of the part on the other side of the line.
2018-02-22 09:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2769356667995453, "perplexity": 1253.2776964738455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814079.59/warc/CC-MAIN-20180222081525-20180222101525-00364.warc.gz"}
https://physics.stackexchange.com/tags/conductors/hot?filter=month
# Tag Info

5 They do indeed repel each other. But they are repelled from the point they are coming from even more strongly. Imagine having two charged metal balls where one has half the charge of the other. When you connect them with a wire, will charges flow? Yes. Sure, each individual electron feels a strong repulsion from both of the balls, since there already is an ...

4 Electrons do repel each other, but they also like to spread out. Quantum mechanics tells us that it costs a lot of energy to localize an electron in a small volume. These two tendencies compete. The quantum mechanical Hubbard model is based on these two effects. It has two parameters: on-site repulsion and transfer energy (transfer Hamiltonian matrix element)....

3 The drag is due to repulsion caused by eddy currents induced by the moving magnetic field in the aluminium metal. The repulsive force opposes the motion of the metal ball, in accordance with Lenz's law. The same thing will happen if you replace the aluminium with copper metal.

3 "If I take a cross-section close to the beginning of the conductor, charges which start moving at one end don't experience as many collisions when they get to that cross-section close to the beginning as they will when they come to the other end of the conductor. It seems that resistance should increase from one end of the conductor towards the other." You seem to ...

1 Inside the cavity we have placed a $+q$ charge. Due to the electric field of the $+q$ charge in the cavity (radiating outwards), the free electrons drift towards the inside surface of the cavity (opposite to the radially outward direction of the positive charge in the cavity). As a result, the inside surface of the cavity gets a negative charge and the outer surface of ...

1 Yes, there is a difference. If you made the wire as you mentioned, into a spiral, like this: then there is quite a big difference between this and a straight wire. The difference between a straight wire and a coil or spiral wire is that the spiral wire resists changes in current flow. This is called an inductor or solenoid. It resists changes in the ...
2021-05-07 21:40:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6760057210922241, "perplexity": 302.0438957446246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00258.warc.gz"}
https://www.tutorke.com/lesson/342-the-acceleration-of-a-body-moving-along-a-straight-line-is-4-t-ms-2-and-its-velocity-is.aspx
# Differentiation and Its Applications Questions and Answers

The acceleration of a body moving along a straight line is (4 − t) m/s² and its velocity is v m/s after t seconds.

a) i) If the initial velocity of the body is 3 m/s, express the velocity v in terms of t.
   ii) Find the velocity of the body after 2 seconds.

b) Calculate:
   i) The time taken to attain maximum velocity.
   ii) The distance covered by the body to attain the maximum velocity.
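For reference, the calculus here is short; a hedged worked sketch (my own, not the site's marking scheme), using the initial condition $v(0)=3$ from the question:

$v(t)=3+\int_0^t(4-s)\,ds=3+4t-\tfrac{t^2}{2},\qquad v(2)=3+8-2=9\ \text{m/s},$

$a=4-t=0\ \Rightarrow\ t=4\ \text{s},\qquad \int_0^4 v(t)\,dt=\Big[3t+2t^2-\tfrac{t^3}{6}\Big]_0^4=\tfrac{100}{3}\approx 33.3\ \text{m}.$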
2023-01-31 05:59:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4702702462673187, "perplexity": 1969.1600026876624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00189.warc.gz"}
https://data.lesslikely.com/concurve/articles/examples.html
## Introduction

Here I show how to produce P-value, S-value, likelihood, and deviance functions with the concurve package using fake data and data from real studies. Simply put, these functions are rich sources of information for scientific inference, and the image below, taken from Xie & Singh, 2013,1 displays why. For a more extensive discussion of these concepts, see the following references.1-13

# Simple Models

To get started, we could generate some normal data and combine two vectors in a dataframe

library(concurve)
set.seed(1031)
GroupA <- rnorm(500)
GroupB <- rnorm(500)
RandomData <- data.frame(GroupA, GroupB)

and look at the differences between the two vectors. We'll plug these vectors and the dataframe they're in inside of the curve_mean() function. Here, the default method involves calculating CIs using the Wald method.

intervalsdf <- curve_mean(GroupA, GroupB,
  data = RandomData, method = "default"
)

Each of the functions within concurve will generally produce a list with three items, and the first will usually contain the function of interest.

tibble::tibble(intervalsdf[[1]])
#> # A tibble: 10,000 x 1
#>    $lower.limit $upper.limit $intrvl.width $intrvl.level  $cdf $pvalue
#>           <dbl>        <dbl>         <dbl>         <dbl> <dbl>   <dbl>
#>  1       -0.113       -0.113     0              0         0.5    1
#>  2       -0.113       -0.113     0.0000154      0.0001    0.500  1.00
#>  3       -0.113       -0.113     0.0000309      0.0002    0.500  1.00
#>  4       -0.113       -0.113     0.0000463      0.000300  0.500  1.00
#>  5       -0.113       -0.113     0.0000617      0.0004    0.500  1.00
#>  6       -0.113       -0.113     0.0000772      0.0005    0.500  1.00
#>  7       -0.113       -0.113     0.0000926      0.000600  0.500  0.999
#>  8       -0.113       -0.113     0.000108       0.0007    0.500  0.999
#>  9       -0.113       -0.112     0.000123       0.0008    0.500  0.999
#> 10       -0.113       -0.112     0.000139       0.0009    0.500  0.999
#> # … with 9,990 more rows, and 1 more variable: $svalue <dbl>

We can view the function using the ggcurve() function. The two basic arguments that must be provided are the data argument and the "type" argument. To plot a consonance function, we would write "c".

(function1 <- ggcurve(data = intervalsdf[[1]], type = "c", nullvalue = TRUE))

We can see that the consonance "curve" is every interval estimate plotted, and provides the P-values, CIs, along with the median unbiased estimate. It can be defined as such,

$C V_{n}(\theta)=1-2\left|H_{n}(\theta)-0.5\right|=2 \min \left\{H_{n}(\theta), 1-H_{n}(\theta)\right\}$

Its information counterpart, the surprisal function, can be constructed by taking the $$-\log_{2}$$ of the P-value.3,14,15 To view the surprisal function, we simply change the type to "s".

(function1 <- ggcurve(data = intervalsdf[[1]], type = "s"))

We can also view the consonance distribution by changing the type to "cdf", which is a cumulative probability distribution. The point at which the curve reaches 50% is known as the "median unbiased estimate". It is the same estimate that is typically at the peak of the P-value curve from above.

(function1s <- ggcurve(data = intervalsdf[[2]], type = "cdf", nullvalue = TRUE))

We can also get relevant statistics that show the range of values by using the curve_table() function. There are several formats that can be exported such as .docx, .ppt, and TeX.
(x <- curve_table(data = intervalsdf[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.132 | -0.093 | 0.039 | 25.0 | 0.625 | 0.750 | 0.415 |
| -0.154 | -0.071 | 0.083 | 50.0 | 0.750 | 0.500 | 1.000 |
| -0.183 | -0.042 | 0.142 | 75.0 | 0.875 | 0.250 | 2.000 |
| -0.192 | -0.034 | 0.158 | 80.0 | 0.900 | 0.200 | 2.322 |
| -0.201 | -0.024 | 0.177 | 85.0 | 0.925 | 0.150 | 2.737 |
| -0.214 | -0.011 | 0.203 | 90.0 | 0.950 | 0.100 | 3.322 |
| -0.233 | 0.008 | 0.242 | 95.0 | 0.975 | 0.050 | 4.322 |
| -0.251 | 0.026 | 0.276 | 97.5 | 0.988 | 0.025 | 5.322 |
| -0.271 | 0.046 | 0.318 | 99.0 | 0.995 | 0.010 | 6.644 |

# Comparing Functions

If we wanted to compare two studies to see the amount of "consonance", we could use the curve_compare() function to get a numerical output. First, we generate some more fake data

GroupA2 <- rnorm(500)
GroupB2 <- rnorm(500)
RandomData2 <- data.frame(GroupA2, GroupB2)
model <- lm(GroupA2 ~ GroupB2, data = RandomData2)
randomframe <- curve_gen(model, "GroupB2")

Once again, we'll plot this data with ggcurve(). We can also indicate whether we want certain interval estimates to be plotted in the function with the "levels" argument. If we wanted to plot the 50%, 75%, and 95% intervals, we'd provide the argument this way:

(function2 <- ggcurve(type = "c", randomframe[[1]], levels = c(0.50, 0.75, 0.95), nullvalue = TRUE))

Now that we have two datasets and two functions, we can compare them using the curve_compare() function.

(curve_compare(
  data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "c",
  plot = TRUE, measure = "default", nullvalue = TRUE
))
#> [1] "AUC = Area Under the Curve"
#> [[1]]
#>
#> AUC 1   AUC 2   Shared AUC   AUC Overlap (%)   Overlap:Non-Overlap AUC Ratio
#> ------  ------  -----------  ----------------  ------------------------------
#> 0.098   0.073   0.024        16.309            0.195
#>
#> [[2]]

This function will provide us with the area that is shared between the curves, along with a ratio of overlap to non-overlap. We can also do this for the surprisal function simply by changing type to "s".

(curve_compare(
  data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "s",
  plot = TRUE, measure = "default", nullvalue = FALSE
))
#> [1] "AUC = Area Under the Curve"
#> [[1]]
#>
#> AUC 1   AUC 2   Shared AUC   AUC Overlap (%)   Overlap:Non-Overlap AUC Ratio
#> ------  ------  -----------  ----------------  ------------------------------
#> 3.947   1.531   1.531        38.801            0.634
#>
#> [[2]]

It's clear that the outputs have changed and indicate far more overlap than before.

# Survival Modeling

Here, we'll look at how to create consonance functions from the coefficients of predictors of interest in a Cox regression model. We'll use the carData package for this. Fox & Weisberg, 2018 describe the dataset elegantly in their paper,

The Rossi data set in the carData package contains data from an experimental study of recidivism of 432 male prisoners, who were observed for a year after being released from prison (Rossi et al., 1980). The following variables are included in the data; the variable names are those used by Allison (1995), from whom this example and variable descriptions are adapted:

week: week of first arrest after release, or censoring time.

arrest: the event indicator, equal to 1 for those arrested during the period of the study and 0 for those who were not arrested.

fin: a factor, with levels "yes" if the individual received financial aid after release from prison, and "no" if he did not; financial aid was a randomly assigned factor manipulated by the researchers.

age: in years at the time of release.

race: a factor with levels "black" and "other".
wexp: a factor with levels "yes" if the individual had full-time work experience prior to incarceration and "no" if he did not.

mar: a factor with levels "married" if the individual was married at the time of release and "not married" if he was not.

paro: a factor coded "yes" if the individual was released on parole and "no" if he was not.

prio: number of prior convictions.

educ: education, a categorical variable coded numerically, with codes 2 (grade 6 or less), 3 (grades 6 through 9), 4 (grades 10 and 11), 5 (grade 12), or 6 (some post-secondary).

emp1–emp52: factors coded "yes" if the individual was employed in the corresponding week of the study and "no" otherwise.

We read the data file into a data frame, and print the first few cases (omitting the variables emp1 – emp52, which are in columns 11–62 of the data frame):

library(carData)
Rossi[1:5, 1:10]
#>   week arrest fin age  race wexp         mar paro prio educ
#> 1   20      1  no  27 black   no not married  yes    3    3
#> 2   17      1  no  18 black   no not married  yes    8    4
#> 3   25      1  no  19 other  yes not married  yes   13    3
#> 4   52      0 yes  23 black  yes     married  yes    1    5
#> 5   52      0  no  19 other  yes not married  yes    3    3

Thus, for example, the first individual was arrested in week 20 of the study, while the fourth individual was never rearrested, and hence has a censoring time of 52. Following Allison, a Cox regression of time to rearrest on the time-constant covariates is specified as follows:

library(survival)
mod.allison <- coxph(Surv(week, arrest) ~
  fin + age + race + wexp + mar + paro + prio,
  data = Rossi
)
mod.allison
#> Call:
#> coxph(formula = Surv(week, arrest) ~ fin + age + race + wexp +
#>     mar + paro + prio, data = Rossi)
#>
#>                    coef exp(coef) se(coef)      z       p
#> finyes         -0.37942   0.68426  0.19138 -1.983 0.04742
#> age            -0.05744   0.94418  0.02200 -2.611 0.00903
#> raceother      -0.31390   0.73059  0.30799 -1.019 0.30812
#> wexpyes        -0.14980   0.86088  0.21222 -0.706 0.48029
#> marnot married  0.43370   1.54296  0.38187  1.136 0.25606
#> paroyes        -0.08487   0.91863  0.19576 -0.434 0.66461
#> prio            0.09150   1.09581  0.02865  3.194 0.00140
#>
#> Likelihood ratio test=33.27 on 7 df, p=2.362e-05
#> n= 432, number of events= 114

Now that we have our Cox model object, we can use the curve_surv() function to create the function. If we wanted to create a function for the coefficient of prior convictions, then we'd do so like this:

z <- curve_surv(mod.allison, "prio")

Then we could plot our consonance curve and density and also produce a table of relevant statistics. Because we're working with ratios, we'll set the measure argument in ggcurve() to "ratio".
ggcurve(z[[1]], measure = "ratio", nullvalue = TRUE)
ggcurve(z[[2]], type = "cd", measure = "ratio", nullvalue = TRUE)
curve_table(z[[1]], format = "image")

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| 1.086 | 1.106 | 0.020 | 25.0 | 0.625 | 0.750 | 0.415 |
| 1.075 | 1.117 | 0.042 | 50.0 | 0.750 | 0.500 | 1.000 |
| 1.060 | 1.133 | 0.072 | 75.0 | 0.875 | 0.250 | 2.000 |
| 1.056 | 1.137 | 0.080 | 80.0 | 0.900 | 0.200 | 2.322 |
| 1.052 | 1.142 | 0.090 | 85.0 | 0.925 | 0.150 | 2.737 |
| 1.045 | 1.149 | 0.103 | 90.0 | 0.950 | 0.100 | 3.322 |
| 1.036 | 1.159 | 0.123 | 95.0 | 0.975 | 0.050 | 4.322 |
| 1.028 | 1.168 | 0.141 | 97.5 | 0.988 | 0.025 | 5.322 |
| 1.018 | 1.180 | 0.162 | 99.0 | 0.995 | 0.010 | 6.644 |

We could also construct a function for another predictor such as age

x <- curve_surv(mod.allison, "age")
ggcurve(x[[1]], measure = "ratio")
ggcurve(x[[2]], type = "cd", measure = "ratio")
curve_table(x[[1]], format = "image")

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| 0.938 | 0.951 | 0.013 | 25.0 | 0.625 | 0.750 | 0.415 |
| 0.930 | 0.958 | 0.028 | 50.0 | 0.750 | 0.500 | 1.000 |
| 0.921 | 0.968 | 0.048 | 75.0 | 0.875 | 0.250 | 2.000 |
| 0.918 | 0.971 | 0.053 | 80.0 | 0.900 | 0.200 | 2.322 |
| 0.915 | 0.975 | 0.060 | 85.0 | 0.925 | 0.150 | 2.737 |
| 0.911 | 0.979 | 0.068 | 90.0 | 0.950 | 0.100 | 3.322 |
| 0.904 | 0.986 | 0.081 | 95.0 | 0.975 | 0.050 | 4.322 |
| 0.899 | 0.992 | 0.093 | 97.5 | 0.988 | 0.025 | 5.322 |
| 0.892 | 0.999 | 0.107 | 99.0 | 0.995 | 0.010 | 6.644 |

That's a very quick look at creating functions from Cox regression models.

# Meta-Analysis

Here, we'll use an example dataset taken from the metafor website, which also comes preloaded with the metafor package.

library(metafor)
#> Loading 'metafor' package (version 2.1-0). For an overview
#> and introduction to the package please type: help(metafor).
dat.hine1989
#>   study         source n1i n2i ai ci
#> 1     1  Chopra et al.  39  43  2  1
#> 2     2       Mogensen  44  44  4  4
#> 3     3    Pitt et al. 107 110  6  4
#> 4     4   Darby et al. 103 100  7  5
#> 5     5 Bennett et al. 110 106  7  3
#> 6     6 O'Brien et al. 154 146 11  4

I will quote Wolfgang here, since he explains it best,

"As described under help(dat.hine1989), variables n1i and n2i are the number of patients in the lidocaine and control group, respectively, and ai and ci are the corresponding number of deaths in the two groups. Since these are 2×2 table data, a variety of different outcome measures could be used for the meta-analysis, including the risk difference, the risk ratio (relative risk), and the odds ratio (see Table III). Normand (1999) uses risk differences for the meta-analysis, so we will proceed accordingly. We can calculate the risk differences and corresponding sampling variances with:

dat <- escalc(measure = "RD", n1i = n1i, n2i = n2i, ai = ai, ci = ci, data = dat.hine1989)
dat
#>   study         source n1i n2i ai ci     yi     vi
#> 1     1  Chopra et al.  39  43  2  1 0.0280 0.0018
#> 2     2       Mogensen  44  44  4  4 0.0000 0.0038
#> 3     3    Pitt et al. 107 110  6  4 0.0197 0.0008
#> 4     4   Darby et al. 103 100  7  5 0.0180 0.0011
#> 5     5 Bennett et al. 110 106  7  3 0.0353 0.0008
#> 6     6 O'Brien et al. 154 146 11  4 0.0440 0.0006

"Note that the yi values are the risk differences in terms of proportions. Since Normand (1999) provides the results in terms of percentages, we can make the results directly comparable by multiplying the risk differences by 100 (and the sampling variances by $$100^{2}$$):

dat$yi <- dat$yi * 100
dat$vi <- dat$vi * 100^2

We can fit a fixed-effects model with the following

fe <- rma(yi, vi, data = dat, method = "FE")

Now that we have our metafor object, we can compute the consonance function using the curve_meta() function.

fecurve <- curve_meta(fe)

Now we can graph our function.
ggcurve(fecurve[[1]], nullvalue = TRUE)

We used a fixed-effects model here, but if we wanted to use a random-effects model, we could do so with the following, which will use a restricted maximum likelihood estimator for the random-effects model

re <- rma(yi, vi, data = dat, method = "REML")

And then we could use curve_meta() to get the relevant list

recurve <- curve_meta(re)

Now we can plot our object.

ggcurve(recurve[[1]], nullvalue = TRUE)

We could also compare our two models to see how much consonance/overlap there is

curve_compare(fecurve[[1]], recurve[[1]], plot = TRUE)
#> [1] "AUC = Area Under the Curve"
#> [[1]]
#>
#> AUC 1   AUC 2   Shared AUC   AUC Overlap (%)   Overlap:Non-Overlap AUC Ratio
#> ------  ------  -----------  ----------------  ------------------------------
#> 2.085   2.085   2.085        100               Inf
#>
#> [[2]]

The results are practically the same and we cannot actually see any difference, and the AUC % overlap also indicates this.

# Constructing Functions From Single Intervals

We can also take a set of confidence limits and use them to construct a consonance, surprisal, likelihood or deviance function using the curve_rev() function. This method is computed from the approximate normal distribution. Here, we'll use two epidemiological studies16,17 that studied the impact of SSRI exposure in pregnant mothers, and the rate of autism in children. Both of these studies suggested a null effect of SSRI exposure on autism rates in children.

curve1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "c", measure = "ratio", steps = 10000)
(ggcurve(data = curve1[[1]], type = "c", measure = "ratio", nullvalue = TRUE))
curve2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "c", measure = "ratio", steps = 10000)
(ggcurve(data = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE))

The null value is shown via the red line and it's clear that a large mass of the function is away from it. We can also see this by plotting the likelihood functions via the curve_rev() function.

lik1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "l", measure = "ratio", steps = 10000)
(ggcurve(data = lik1[[1]], type = "l1", measure = "ratio", nullvalue = TRUE))
lik2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "l", measure = "ratio", steps = 10000)
(ggcurve(data = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE))

We can also view the amount of agreement between the likelihood functions of these two studies.

(plot_compare(
  data1 = lik1[[1]], data2 = lik2[[1]],
  type = "l1", measure = "ratio", nullvalue = TRUE,
  title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017. JAMA.",
  subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59",
  xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio")
))

and the consonance functions

(plot_compare(
  data1 = curve1[[1]], data2 = curve2[[1]],
  type = "c", measure = "ratio", nullvalue = TRUE,
  title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017.
JAMA.",
  subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59",
  xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio")
))

# The Bootstrap and Consonance Functions

Some authors have shown that the bootstrap distribution is equal to the confidence distribution because it meets the definition of a consonance distribution.1,18,19 The bootstrap distribution and the asymptotic consonance distribution would be defined as:

$H_{n}(\theta)=1-P\left(\hat{\theta}-\hat{\theta}^{*} \leq \hat{\theta}-\theta | \mathbf{x}\right)=P\left(\hat{\theta}^{*} \leq \theta | \mathbf{x}\right)$

Certain bootstrap methods such as the BCa method and t-bootstrap method also yield second-order accuracy of consonance distributions.

$H_{n}(\theta)=1-P\left(\frac{\hat{\theta}^{*}-\hat{\theta}}{\widehat{S E}^{*}\left(\hat{\theta}^{*}\right)} \leq \frac{\hat{\theta}-\theta}{\widehat{S E}(\hat{\theta})} | \mathbf{x}\right)$

Here, I demonstrate how to use these particular bootstrap methods to arrive at consonance curves and densities. We'll use the Iris dataset and construct a function that'll yield a parameter of interest.

## The Nonparametric Bootstrap

iris <- datasets::iris
foo <- function(data, indices) {
  dt <- data[indices, ]
  c(
    cor(dt[, 1], dt[, 2], method = "p")
  )
}

We can now use the curve_boot() method to construct a function. The default method used for this function is the "Bca" method provided by the bcaboot package.19 The call itself is not shown in this extract; it would be along the lines of y <- curve_boot(data = iris, func = foo, method = "bca", replicates = 2000) (the method string and replicate count here are my guesses from context, not the vignette's own values). I will suppress the output of the function because it is unnecessarily long, but all the estimates are placed into a list object called y. The first item in the list will be the consonance distribution constructed by typical means, while the third item will be the bootstrap approximation to the consonance distribution.

ggcurve(data = y[[1]], nullvalue = TRUE)
ggcurve(data = y[[3]], nullvalue = TRUE)

We can also print out a table for TeX documents

(gg <- curve_table(data = y[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.142 | -0.093 | 0.048 | 25 | 0.625 | 0.75 | 0.415 |
| -0.169 | -0.067 | 0.102 | 50 | 0.750 | 0.50 | 1.000 |
| -0.205 | -0.031 | 0.174 | 75 | 0.875 | 0.25 | 2.000 |
| -0.214 | -0.021 | 0.194 | 80 | 0.900 | 0.20 | 2.322 |
| -0.266 | 0.031 | 0.296 | 95 | 0.975 | 0.05 | 4.322 |
| -0.312 | 0.077 | 0.389 | 99 | 0.995 | 0.01 | 6.644 |

More bootstrap replications will lead to a smoother function. But for now, we can compare these two functions to see how similar they are.

plot_compare(y[[1]], y[[3]])

If we wanted to look at the bootstrap standard errors, we could do so by loading the fifth item in the list

knitr::kable(y[[5]])

|     | theta      | sdboot    | z0        | a         | sdjack   |
|-----|------------|-----------|-----------|-----------|----------|
| est | -0.1175698 | 0.0755961 | 0.0576844 | 0.0304863 | 0.075694 |
| jsd |  0.0000000 | 0.0010234 | 0.0274023 | 0.0000000 | 0.000000 |

where in the top row, theta is the point estimate, and sdboot is the bootstrap estimate of the standard error, sdjack is the jackknife estimate of the standard error. z0 is the bias correction value and a is the acceleration constant. The values in the second row are essentially the internal standard errors of the estimates in the top row.

The densities can also be calculated accurately using the t-bootstrap method. Here we use a different dataset to show this

library(Lock5Data)
data(CommuteAtlanta)
func <- function(data, index) {
  x <- as.numeric(unlist(data[1]))
  y <- as.numeric(unlist(data[2]))
  return(mean(x[index]) - mean(y[index]))
}

Our function is a simple mean difference.
This time, we'll set the method to "t" for the t-bootstrap method

z <- curve_boot(data = CommuteAtlanta, func = func, method = "t", replicates = 2000, steps = 1000)
#> Warning in norm.inter(t, alpha): extreme order statistics used as endpoints

ggcurve(data = z[[1]], nullvalue = FALSE)
ggcurve(data = z[[2]], type = "cd", nullvalue = FALSE)

The consonance curve and density are nearly identical. With more bootstrap replications, they are very likely to converge.

(zz <- curve_table(data = z[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -39.400 | -39.075 | 0.325 | 25.0 | 0.625 | 0.750 | 0.415 |
| -39.611 | -38.876 | 0.735 | 50.0 | 0.750 | 0.500 | 1.000 |
| -39.873 | -38.608 | 1.265 | 75.0 | 0.875 | 0.250 | 2.000 |
| -39.932 | -38.530 | 1.402 | 80.0 | 0.900 | 0.200 | 2.322 |
| -40.026 | -38.456 | 1.570 | 85.0 | 0.925 | 0.150 | 2.737 |
| -40.118 | -38.354 | 1.763 | 90.0 | 0.950 | 0.100 | 3.322 |
| -40.294 | -38.174 | 2.120 | 95.0 | 0.975 | 0.050 | 4.322 |
| -40.442 | -38.026 | 2.416 | 97.5 | 0.988 | 0.025 | 5.322 |
| -40.636 | -37.806 | 2.830 | 99.0 | 0.995 | 0.010 | 6.644 |

## The Parametric Bootstrap

For the examples above, we mainly used nonparametric bootstrap methods. Here I show an example using the parametric BCa bootstrap and the results it yields. First, we'll load our data again and set our function.

data(diabetes, package = "bcaboot")
X <- diabetes$x
y <- scale(diabetes$y, center = TRUE, scale = FALSE)
lm.model <- lm(y ~ X - 1)
mu.hat <- lm.model$fitted.values
sigma.hat <- stats::sd(lm.model$residuals)
t0 <- summary(lm.model)$adj.r.squared
y.star <- sapply(mu.hat, rnorm, n = 1000, sd = sigma.hat)
tt <- apply(y.star, 1, function(y) summary(lm(y ~ X - 1))$adj.r.squared)
b.star <- y.star %*% X

Now, we'll use the same function, but set the method to "bcapar" for the parametric method.

df <- curve_boot(method = "bcapar", t0 = t0, tt = tt, bb = b.star)

Now we can look at our outputs.

ggcurve(df[[1]], nullvalue = FALSE)
ggcurve(df[[3]], nullvalue = FALSE)

We can compare the functions to see how well the bootstrap approximations match up

plot_compare(df[[1]], df[[3]])

We can also look at the density function

ggcurve(df[[5]], type = "cd", nullvalue = FALSE)

That concludes our demonstration of the bootstrap method to approximate consonance functions.

## Using Profile Likelihoods

For this last example, we'll explore the curve_lik() function, which can help generate profile likelihood functions, and deviance statistics with the help of the ProfileLikelihood package.

library(ProfileLikelihood)
#> Loading required package: MASS

We'll use a simple example taken directly from the ProfileLikelihood documentation where we'll calculate the likelihoods from a glm model

data(dataglm)
xx <- profilelike.glm(y ~ x1 + x2,
  data = dataglm, profile.theta = "group",
  family = binomial(link = "logit"), length = 500, round = 2
)
#> Warning message: provide lo.theta and hi.theta

Then, we'll use curve_lik() on the object that the ProfileLikelihood package created.
lik <- curve_lik(xx, dataglm)
tibble::tibble(lik[[1]])
#> # A tibble: 500 x 1
#>    $values $likelihood $loglikelihood  $support $deviancestat
#>      <dbl>       <dbl>          <dbl>     <dbl>         <dbl>
#>  1   -1.41    9.26e-21          -9.79 0.0000560          9.79
#>  2   -1.40    1.00e-20          -9.71 0.0000606          9.71
#>  3   -1.39    1.08e-20          -9.63 0.0000655          9.63
#>  4   -1.38    1.17e-20          -9.56 0.0000708          9.56
#>  5   -1.37    1.26e-20          -9.48 0.0000765          9.48
#>  6   -1.35    1.37e-20          -9.40 0.0000826          9.40
#>  7   -1.34    1.47e-20          -9.32 0.0000892          9.32
#>  8   -1.33    1.59e-20          -9.25 0.0000963          9.25
#>  9   -1.32    1.72e-20          -9.17 0.000104           9.17
#> 10   -1.31    1.85e-20          -9.10 0.000112           9.10
#> # … with 490 more rows

Next, we'll plot four functions: the relative likelihood, the log-likelihood, the likelihood, and the deviance function.

ggcurve(lik[[1]], type = "l1", nullvalue = TRUE)
ggcurve(lik[[1]], type = "l2")
ggcurve(lik[[1]], type = "l3")
ggcurve(lik[[1]], type = "d")

The obvious advantage of using reduced likelihoods is that they are free of nuisance parameters

$L_{t_{n}}(\theta)=f_{n}\left(F_{n}^{-1}\left(H_{p i v}(\theta)\right)\right)\left|\frac{\partial}{\partial t} \psi\left(t_{n}, \theta\right)\right|=h_{p i v}(\theta)\left|\frac{\partial}{\partial t} \psi(t, \theta)\right| /\left.\left|\frac{\partial}{\partial \theta} \psi(t, \theta)\right|\right|_{t=t_{n}}$

thus giving summaries of the data that can be incorporated into combined analyses.

# References

1. Xie M-g, Singh K. Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review. International Statistical Review. 2013;81(1):3-39. doi:10.1111/insr.12000
2. Birnbaum A. A unified theory of estimation, I. The Annals of Mathematical Statistics. 1961;32(1):112-135. doi:10.1214/aoms/1177705145
3. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. arXiv:190908579 [statME]. September 2019. http://arxiv.org/abs/1909.08579.
4. Fraser DAS. P-Values: The Insight to Modern Statistical Inference. Annual Review of Statistics and Its Application. 2017;4(1):1-14. doi:10.1146/annurev-statistics-060116-054139
5. Fraser DAS. The P-value function and statistical inference. The American Statistician. 2019;73(sup1):135-147. doi:10.1080/00031305.2018.1556735
6. Poole C. Beyond the confidence interval. American Journal of Public Health. 1987;77(2):195-199. doi:10.2105/AJPH.77.2.195
7. Poole C. Confidence intervals exclude nothing. American Journal of Public Health. 1987;77(4):492-493. doi:10.2105/ajph.77.4.492
8. Schweder T, Hjort NL. Confidence and Likelihood. Scand J Stat. 2002;29(2):309-332. doi:10.1111/1467-9469.00285
9. Schweder T, Hjort NL. Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions. Cambridge University Press; 2016.
10. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – distribution estimator of a parameter. August 2007. http://arxiv.org/abs/0708.0976.
11. Sullivan KM, Foster DA. Use of the confidence interval function. Epidemiology. 1990;1(1):39-42. doi:10.1097/00001648-199001000-00009
12. Whitehead J. The case for frequentism in clinical trials. Statistics in Medicine. 1993;12(15-16):1405-1413. doi:10.1002/sim.4780121506
13. Rothman KJ, Greenland S, Lash TL. Precision and statistics in epidemiologic studies. In: Rothman KJ, Greenland S, Lash TL, eds. Modern Epidemiology. 3rd ed. Lippincott Williams & Wilkins; 2008:148-167.
14. Greenland S. Valid P-values behave exactly as they should: Some misleading criticisms of P-values and their resolution with S-values.
The American Statistician. 2019;73(sup1):106-114. doi:10.1080/00031305.2018.1529625
15. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27(3):379-423. doi:10.1002/j.1538-7305.1948.tb01338.x
16. Brown HK, Ray JG, Wilton AS, Lunsky Y, Gomes T, Vigod SN. Association between serotonergic antidepressant use during pregnancy and autism spectrum disorder in children. JAMA. 2017;317(15):1544-1552. doi:10.1001/jama.2017.3415
17. Brown HK, Hussain-Shamsy N, Lunsky Y, Dennis C-LE, Vigod SN. The association between antenatal exposure to selective serotonin reuptake inhibitors and autism: A systematic review and meta-analysis. The Journal of Clinical Psychiatry. 2017;78(1):e48-e58. doi:10.4088/JCP.15r10194
18. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. CRC Press; 1994.
19. Efron B, Narasimhan B. The automatic construction of bootstrap confidence intervals. October 2018:17.
2020-04-09 06:00:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5648199319839478, "perplexity": 6947.525005279786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371830894.88/warc/CC-MAIN-20200409055849-20200409090349-00533.warc.gz"}
https://comm.support.ca.com/kb/launch-a-vse-server-as-a-windows-service-with-a-different-localproperties-using-vmoptions/kb000048429
# Launch a VSE Server as a Windows Service with a different local.properties (using .vmoptions)

Document ID: KB000048429

Question: How do you launch a VSE Server as a Windows Service with a different local.properties?

1. Create a new text file in LISA_HOME/bin called VirtualServiceEnvironmentService.vmoptions.
2. Edit the file, and add this line: -DLISA_LOCAL_PROPERTIES=C:\path\to\vse.local.properties (tweak the path accordingly)
3. Restart the VSE service. It should now be using the local.properties that you specified.

You can do the same thing with any LISA executable in the bin directory. Just make an exename.vmoptions file and put the JVM options you want in the file (one option per line). The vmoptions files are used to pass additional parameters to a Java process in order to modify the default settings used for the JVM. These files can be used to customize the memory allocation settings for each of the LISA processes used in the server. These files must be located in the same folder as the actual executable scripts and must have the same name, with the exception of the extension (.vmoptions). These files are located in the LISA_HOME\bin folder. The contents can be like:

-Xms256m
-Xmx1024m
-Xss512k

Okay, this works for Internet-based licenses, but I have file-based licenses. How can this work for that? We use the same vmoptions file. In the above example, it is used to point to a different license via a local.properties file. This works for an Internet-based license, but not a file-based license, because by DEFAULT the lisalic.xml file is in LISA_HOME. There is a property, lisa.license, that can be changed to allow for multiple file-based licenses in LISA_HOME. The lisa.license property contains the fully qualified path to the license file and defaults to the lisalic.xml in the LISA_HOME folder. How can we use this to our advantage?
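A hedged sketch of how that could be combined with the .vmoptions mechanism above (the path and file name below are illustrative, and it is an assumption on my part that lisa.license can be overridden as a JVM system property in the same way as LISA_LOCAL_PROPERTIES):

-Dlisa.license=C:\path\to\alternate\lisalic.xml

Placed in LISA_HOME/bin/VirtualServiceEnvironmentService.vmoptions, a line like this would let each VSE service instance point at its own file-based license, mirroring the local.properties trick.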
2018-11-21 04:36:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4245600998401642, "perplexity": 2573.7023966079623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747024.85/warc/CC-MAIN-20181121032129-20181121054129-00264.warc.gz"}
http://mscroggs.co.uk/puzzles/tags/geometry
# Puzzles

## 23 December
Today's number is the area of the largest-area rectangle with perimeter 46 whose sides are all of integer length.

## 12 December
These three vertices form a right-angled triangle. There are 2600 different ways to pick three vertices of a regular 26-sided shape. Sometimes the three vertices you pick form a right-angled triangle. Today's number is the number of different ways to pick three vertices of a regular 26-sided shape so that the three vertices make a right-angled triangle.

## Equal lengths
The picture below shows two copies of the same rectangle with red and blue lines. The blue line visits the midpoint of the opposite side. The lengths shown in red and blue are equal. What is the ratio of the sides of the rectangle?

## Is it equilateral?
In the diagram below, $$ABDC$$ is a square. Angles $$ACE$$ and $$BDE$$ are both 75°. Is triangle $$ABE$$ equilateral? Why/why not?

## Bending a straw
Two points along a drinking straw are picked at random. The straw is then bent at these points. What is the probability that the two ends meet up to make a triangle?

## Placing plates
Two players take turns placing identical plates on a square table. The player who is first to be unable to place a plate loses. Which player wins?

## 20 December
Earlier this year, I wrote a blog post about different ways to prove Pythagoras' theorem. Today's puzzle uses Pythagoras' theorem. Start with a line of length 2. Draw a line of length 17 perpendicular to it. Connect the ends to make a right-angled triangle. The length of the hypotenuse of this triangle will be a non-integer. Draw a line of length 17 perpendicular to the hypotenuse and make another right-angled triangle. Again the new hypotenuse will have a non-integer length. Repeat this until you get a hypotenuse of integer length. What is the length of this hypotenuse?

## 17 December
The number of degrees in one internal angle of a regular polygon with 360 sides.
2019-09-18 22:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5971218347549438, "perplexity": 428.51672866812993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00339.warc.gz"}
http://mathhelpforum.com/advanced-statistics/155832-cumulative-distribution-function-using-gamma-distribution.html
## cumulative distribution function using a gamma distribution

Hi there, I am trying to fit some data with a survival function, which is just 1 - cdf (cumulative distribution function). I was able to fit the data assuming a normally distributed random variable. Due to the nature of the experiment I suspect that a gamma-distributed random variable would give a better fit (at the end of the step), since the gamma distribution is "like a normal distribution with a bias on one side". However, I cannot get this to work. It always looks like a normally distributed random variable, i.e. just as in the figure above. I use Python. This is how I define my function:

def scaled_sf_gamma(x, c, d, shape_param):
    return c*stats.gamma.sf(x, shape_param) + d

Parameters c and d scale the survival function. Then I define an additional shape parameter. I optimize the curve fit as:

p_opt_gamma = sp.optimize.curve_fit(scaled_sf_gamma, new_time, CH4_interpolated)[0]

As I said, the result looks like a normal distribution. I think I am missing another parameter to optimize, one which "skews" the normal distribution. Here is some information about the gamma distribution and how it is used: scipy.stats.gamma - SciPy v0.9.dev6665 Reference Guide (DRAFT). I do not understand the "lower or upper tail probability" which is given as a non-optional argument there. Maybe there lies the key... Thanks a lot in advance for any help. Cheers Frank
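One likely culprit, offered as an assumption rather than something stated in the thread: scipy.stats.gamma.sf uses loc=0 and scale=1 by default, so optimizing only the shape parameter pins the distribution's position and width, and over the data range the fit can end up looking just like the normal fit. A minimal sketch that also frees the location and scale parameters (new_time and CH4_interpolated are the asker's own variables; the starting values in p0 are illustrative):

    import scipy.optimize
    import scipy.stats as stats

    def scaled_sf_gamma(x, c, d, shape, loc, scale):
        # scaled/shifted gamma survival function with free shape, loc and scale
        return c * stats.gamma.sf(x, shape, loc=loc, scale=scale) + d

    p0 = [1.0, 0.0, 2.0, 0.0, 1.0]  # c, d, shape, loc, scale
    p_opt_gamma = scipy.optimize.curve_fit(scaled_sf_gamma, new_time,
                                           CH4_interpolated, p0=p0)[0]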
2014-07-31 01:19:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8066100478172302, "perplexity": 584.2260729939748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272256.16/warc/CC-MAIN-20140728011752-00101-ip-10-146-231-18.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/67245/confusion-matrix-of-random-forest-doesnot-match-predicted-probabilities-on-train
# Confusion matrix of random forest does not match predicted probabilities on train data

Based on an earlier question I balanced the classes such that the numbers in both classes are about similar. The random forest gives the following result:

> print(rFresult)
Call:
 randomForest(formula = finresfh ~ ., data = rFdatasubset, importance = TRUE)
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 14

        OOB estimate of  error rate: 35.53%
Confusion matrix:
     1    2 class.error
1 1852  627   0.2529246
2 1022 1140   0.4727105

Prediction on the training set shows perfect separation, in contrast to the confusion matrix:

> tab <- table(probability=round(predict(rFresult, newdata=rFdatasubset, type="prob")[,2],1), TRUE_status=rFdatasubset$finresfh)
> tab
           TRUE_status
probability    1    2
        0.1  978    0
        0.2 1447    0
        0.3   54    0
        0.7    0   65
        0.8    0 1551
        0.9    0  543
        1      0    3

The probability is estimated for the subjects to be in class 2. Each entry in the "probability" table is the number of subjects with that predicted probability level and that TRUE status. Can anyone explain why the estimated probabilities show a perfect separation but a totally different result in the confusion table?

## 1 Answer

You're trying to get predictions on your training dataset. This is misleading, as the component trees in the RF have been obtained by optimising the fit criterion on this data. You need to omit the newdata argument, which will get you the out-of-bag predictions instead.

table(probability=round(predict(rFresult, type="prob")[,2], 1),
      TRUE_status=rFdatasubset$finresfh)

• great, this is the solution – Hans Aug 14 '13 at 6:26
2021-10-25 20:43:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184850096702576, "perplexity": 8347.310580960348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00442.warc.gz"}
http://zenzike.com/posts/euler/2010-08-12-euler-7
# Euler #7 : Benchmarked Primes

by Nicolas Wu

Posted on 12 August 2010

This week's Project Euler question is: Find the 10001st prime. We already described a simple algorithm for finding primes in a previous post, so rather than repeat ourselves, in this article we'll discuss benchmarking using Criterion to find the fastest prime number algorithm that doesn't require too much magic. I'll be taking implementations found on the Haskell wiki.

## Imported modules

First we'll need to import the Criterion modules, which provide us with our benchmarking suite:

> {-# LANGUAGE BangPatterns #-}
> import System.Environment (getArgs, withArgs)
> import Criterion (bgroup, bench, nf)
> import Progression.Main (defaultMain)
> import Data.List.Ordered (minus, union)

I'll actually be using Progression in conjunction with Criterion, which just makes collecting the results of several benchmarks a little easier.

## Prime Algorithms

The prime number generator we used previously was Turner's sieve, defined as follows:

> turner :: [Int]
> turner = sieve [2 .. ]
>   where
>   sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

The Haskell wiki documents a whole range of other algorithms that can be used to generate primes. Here are the definitions that I pulled from the wiki:

> postSieve :: [Int]
> postSieve = 2 : 3 : sieve (tail postSieve) [5,7..]
>   where
>   sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `rem` p /= 0]
>     where (h,~(_:t)) = span (< p*p) xs
>
> trialOdds :: [Int]
> trialOdds = 2 : 3 : filter isPrime [5,7..]
>   where
>   isPrime n = all (notDivs n)
>     $ takeWhile (\p-> p*p <= n) (tail trialOdds)
>   notDivs n p = n `mod` p /= 0
>
> nestedFilters :: [Int]
> nestedFilters = 2 : 3 : sieve [] (tail nestedFilters) 5
>   where
>   notDivsBy d n = n `mod` d /= 0
>   sieve ds (p:ps) x = foldr (filter . notDivsBy) [x,x+2..p*p-2] ds
>     ++ sieve (p:ds) ps (p*p+2)
>
> spansPrimes :: [Int]
> spansPrimes = 2 : 3 : sieve 0 (tail spansPrimes) 5
>   where
>   sieve k (p:ps) x = [n | n <- [x,x+2..p*p-2], and [n `rem` p /= 0 | p <- fs]]
>     ++ sieve (k+1) ps (p*p+2)
>     where fs = take k (tail spansPrimes)
>
> bird :: [Int]
> bird = 2 : primes'
>   where
>   primes' = [3] ++ [5,7..] `minus` foldr union' [] mults
>   mults = map (\p -> let q=p*p in (q, tail [q,q+2*p..])) $ primes'
>   union' (q,qs) xs = q : union qs xs
>
> wheel :: [Int]
> wheel = 2:3:primes'
>   where
>   1:p:candidates = [6*k+r | k <- [0..], r <- [1,5]]
>   primes' = p : filter isPrime candidates
>   isPrime n = all (not . divides n)
>     $ takeWhile (\p -> p*p <= n) primes'
>   divides n p = n `mod` p == 0

I won't go into the details of explaining these different algorithms, since I want us to focus on how we might benchmark these implementations.

## Benchmarking

In order to compare these different algorithms, we construct a program that takes as its argument the name of the function that should be used to produce prime numbers. Once the user has provided this input, the benchmark is executed using Criterion to produce the first 101, 1001, and 10001 primes.

> main = do
>   args <- getArgs
>   let !primes = case head args of
>         "turner"        -> turner
>         "postSieve"     -> postSieve
>         "trialOdds"     -> trialOdds
>         "nestedFilters" -> nestedFilters
>         "spansPrimes"   -> spansPrimes
>         "bird"          -> bird
>         "wheel"         -> wheel
>         _               -> error "prime function unknown!"
>   withArgs (("-n" ++ (head args)) : tail args) $ do
>     defaultMain . bgroup "Primes" $
>       [ bench "101"   $ nf (\n -> primes !! n) 101
>       , bench "1001"  $ nf (\n -> primes !! n) 1001
>       , bench "10001" $ nf (\n -> primes !! n) 10001
>       ]

We then run this code with each prime function name as an argument individually, and the Progression library puts the results together. Here's a bar chart generated from the data:

These results have been normalised against the turner function, and show the results of how long it took for the various algorithms to find the 10001th, 1001th and 100th primes. Solving this week's problem is a simple case of running any one of these algorithms on our magic number:

> euler7 = spansPrimes !! 10001

## Summary

Collecting benchmark information with Criterion and Progression is really quite simple! The best thing about Criterion is that the benchmarking is very robust: detailed statistics are returned regarding the benchmarking process, and whether the results are likely to be accurate. Progression makes the collation of several runs of benchmarks very simple, and means that different versions of a program can be compared with ease.
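For completeness, a sketch of how the runs might be driven from the shell (the post only says each name is passed as an argument; the build command and executable name here are my assumptions):

    ghc -O2 --make euler7.lhs -o euler7
    ./euler7 turner
    ./euler7 postSieve
    ./euler7 trialOdds
    ./euler7 nestedFilters
    ./euler7 spansPrimes
    ./euler7 bird
    ./euler7 wheel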
2017-04-24 13:15:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749905526638031, "perplexity": 6924.305193125778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119361.6/warc/CC-MAIN-20170423031159-00245-ip-10-145-167-34.ec2.internal.warc.gz"}
http://zbmath.org/?q=an:1212.35348
# zbMATH — the first resource for mathematics

On the 3D viscous primitive equations of the large-scale atmosphere. (English) Zbl 1212.35348

Summary: This paper is devoted to considering the three-dimensional viscous primitive equations of the large-scale atmosphere. First, we prove the global well-posedness for the primitive equations with weaker initial data than that in the paper by D. Huang and B. Guo [Sci. China, Ser. D 51, No. 3, 469–480 (2008)]. Second, we obtain the existence of smooth solutions to the equations. Moreover, we obtain the compact global attractor in $V$ for the dynamical system generated by the primitive equations of large-scale atmosphere, which improves the result of D. Huang and B. Guo (loc. cit.).

##### MSC:
35Q30 Stokes and Navier-Stokes equations
65M70 Spectral, collocation and related methods (IVP of PDE)
86A10 Meteorology and atmospheric physics
2014-04-20 18:48:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7629653811454773, "perplexity": 7528.983201005069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/define-or-explain-following-concept-direct-demand-demand_78492
# Define or Explain the Following Concept: Direct Demand - Economics

Concept: Demand

#### Question

Define or explain the following concept: Direct demand

#### Solution

Demand for goods that are purchased for direct consumption, and not used as intermediate goods, is referred to as direct demand. For instance, goods like clothes and food have a direct demand, as they are meant for final consumption. The demand for such goods does not depend on the demand for any other commodity.

Appears in: Micheal Vaz, Class 12 Economics (2019 to Current), Chapter 3: Demand Analysis, Exercise Q 1.4, page 24.
2020-03-28 18:27:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348608613014221, "perplexity": 5846.327564840934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370492125.18/warc/CC-MAIN-20200328164156-20200328194156-00240.warc.gz"}
https://byjus.com/rd-sharma-solutions/class-12-maths-chapter-19-indefinite-integrals-exercise-19-5/
# RD Sharma Solutions For Class 12 Maths Exercise 19.5 Chapter 19 Indefinite Integrals

This exercise deals with the evaluation of integrals of the form $\int (ax+b)\sqrt{cx+d}\, dx$ and $\int \frac{ax+b}{\sqrt{cx+d}}\, dx$. Experts at BYJU'S have formulated the RD Sharma Class 12 Solutions for Maths in the most lucid and easy manner. Solutions are developed using shortcut techniques to help students grasp the concepts faster and to make learning fun. The solutions to this exercise are available in PDF format, which can be downloaded easily from the links provided below. To clear their doubts, students can refer to RD Sharma Solutions for Class 12 Maths Chapter 19 Exercise 19.5.
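As a worked illustration of the substitution this exercise practises (my own example, not one of the book's problems): putting $u = cx+d$, so that $x = (u-d)/c$ and $dx = du/c$,

$\int (ax+b)\sqrt{cx+d}\, dx = \int \left(\frac{a}{c}u + b - \frac{ad}{c}\right)\sqrt{u}\,\frac{du}{c} = \frac{2a}{5c^{2}}(cx+d)^{5/2} + \frac{2(bc-ad)}{3c^{2}}(cx+d)^{3/2} + C,$

which can be checked by differentiating.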
2020-07-13 12:16:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5996934175491333, "perplexity": 1087.9267936265205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00556.warc.gz"}
https://nhigham.com/2017/01/30/good-times-in-matlab/?shared=email&msg=fail
# Good Times in MATLAB: How to Typeset the Multiplication Symbol

The MATLAB output

>> A = rand(2); whos
Name Size Bytes Class Attributes
A 2x2 32 double

will be familiar to seasoned users. Consider this, however, from MATLAB R2016b:

>> s = string({'One','Two'})
s =
1×2 string array
"One" "Two"

At first sight, you might not spot anything unusual, other than the new string datatype. But there are two differences. First, MATLAB prints a header giving the type and size of the array. It does so for arrays of type other than double precision and char. Second, the times symbol is no longer an "x" but is now a multiplication symbol: "×". The new "times" certainly looks better. There are still remnants of "x", for example in whos s for the example above, but I presume that all occurrences of "x" will be changed to the new symbol in the next release.

However, there is a catch: the "×" symbol is a Unicode character, so it will not print correctly when you include the output in LaTeX (at least with the version provided in TeX Live 2016). Moreover, it may not even save correctly if your editor is not set up for Unicode characters.

Here is how we dealt with the problem in the third edition (published in January 2017) of MATLAB Guide. We put the code

\usepackage[utf8x]{inputenc}
\DeclareUnicodeCharacter{0215}{\ensuremath{\times}}

in the preamble of the master TeX file, do.tex. We also told our editor, Emacs, to use a UTF-8 coding, by putting the following code at the end of each included .tex file (we have one file per chapter):

%%% Local Variables:
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: "do"
%%% End:

With this setup we can cut and paste output including "×" into our .tex files and it appears as expected in the LaTeX output.
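A related note (an addition, not from the original post): with the standard utf8 option of inputenc, \DeclareUnicodeCharacter expects the code point in hexadecimal rather than the decimal form used by utf8x, so an equivalent preamble would be

\usepackage[utf8]{inputenc}
\DeclareUnicodeCharacter{00D7}{\ensuremath{\times}}

since hexadecimal 00D7 and decimal 215 name the same Unicode multiplication sign. Recent LaTeX kernels read UTF-8 input by default, so on an up-to-date distribution the explicit declaration may not be needed at all.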
2021-09-22 01:34:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729809522628784, "perplexity": 1859.6125031524075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00527.warc.gz"}
https://cstheory.stackexchange.com/questions/8169/computational-complexity-of-random-sampling?noredirect=1
# Computational complexity of random sampling

I am using some randomized algorithms (particle filters) and I would like to know the computational complexity of obtaining one random sample from a continuous distribution (for instance, from a multivariate Gaussian), in terms of elementary operations... or, failing that, what computational complexity conventional sampling algorithms achieve. Thank you

• It depends on your computational model. Sometimes people just assume you can generate a Gaussian as a unit operation. However, if all you can generate is, say, random bits, and you want an approximate Gaussian, the complexity depends on the approximation you want. – Dana Moshkovitz Sep 10 '11 at 11:26
• @DanaMoshkovitz: maybe this could be an answer? – Suresh Venkat Sep 10 '11 at 20:28
• Ok, I posted it as an answer. – Dana Moshkovitz Sep 10 '11 at 20:31
• FYI In the case of a finite distribution (not what the OP asks!), $O(1)$ time is (in theory) possible. See cstheory.stackexchange.com/questions/37648/…. – Neal Young Aug 22 '18 at 12:18
• A more precise question would be: if you want to sample, from iid random bits, a distribution that is $\epsilon$-close to a Gaussian in total variation distance, what is the running-time dependence of the sampler on $\epsilon$? – Mahdi Cheraghchi Sep 11 '11 at 0:28
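To make the unit-cost model concrete (an added sketch, not from the thread; the function name is mine): if drawing a uniform random number in $(0,1)$ counts as one operation, the Box-Muller transform yields exact $N(0,1)$ samples with a constant number of operations per sample, plus one logarithm, one square root, and two trigonometric evaluations whose cost depends on the precision model.

import math
import random

def standard_gaussian_pair():
    # Box-Muller: two independent uniforms give two independent N(0,1) samples.
    u1 = 1.0 - random.random()  # lies in (0, 1], so log(u1) is finite
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

For a $d$-dimensional multivariate Gaussian $N(\mu, \Sigma)$, the standard recipe is one Cholesky factorization $\Sigma = LL^T$ (cost $O(d^3)$, paid once) followed by $\mu + Lz$ per sample, where $z$ is a vector of $d$ unit Gaussians, for $O(d^2)$ operations per sample. If only iid random bits are available, a continuous distribution cannot be sampled exactly with finitely many bits, so the cost necessarily grows with the accuracy $\epsilon$ demanded, as the last comment above suggests.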
2020-11-24 20:19:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7139304280281067, "perplexity": 642.0394701565398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141177566.10/warc/CC-MAIN-20201124195123-20201124225123-00248.warc.gz"}
http://mathhelpforum.com/algebra/112410-recursion-patterns-arithmetic-geometric-equations-sigma-notations.html
Thread: recursion patterns, arithmetic/geometric equations, and sigma notations

1. recursion patterns, arithmetic/geometric equations, and sigma notations

Could a kind soul please explain what exactly the variables in the equations of arithmetic/geometric sequences mean? I would understand it so much more if I knew what u (sub N) or a meant. I have no notes. Furthermore, I do not quite grasp the concept of the sigma notation, for my teacher is insane. Feel free to delve into that. Thank you

2. Originally Posted by sodumb:(
Could a kind soul please explain what exactly the variables in the equations of arithmetic/geometric sequences mean? I would understand it so much more if I knew what u (sub N) or a meant. I have no notes. Furthermore, I do not quite grasp the concept of the sigma notation, for my teacher is insane. Feel free to delve into that. Thank you

U_n = nth term of a sequence
U_1 = a = first term of a sequence
n = number of terms in a sequence, or the index of a specific term

Arithmetic Sequences
d = common difference ( $U_n - U_{n-1} = U_{n-1}-U_{n-2} =d$)
nth term: $U_n = a+(n-1)d$
Sum of n terms: $S_n=\frac{n}{2}(2a + (n-1)d)$

Geometric Sequence
r = common ratio ( $\frac{U_n}{U_{n-1}} = \frac{U_{n-1}}{U_{n-2}} = r$)
nth term: $U_n = ar^{n-1}$
Sum of n terms ( $r \neq 1$): $S_n = \frac{a(1-r^n)}{1-r} = \frac{a(r^n-1)}{r-1}$
Sum to infinity ( $|r| < 1$): $S_{\infty} = \frac{a}{1-r}$
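A quick worked check of the formulas above, and of sigma notation (added here; not part of the original reply): for the geometric sequence $2, 6, 18$ we have $a=2$, $r=3$, $n=3$, and the sum formula gives $S_3 = \frac{2(3^3-1)}{3-1} = \frac{2 \cdot 26}{2} = 26$, which matches $2+6+18$ computed directly. As for sigma notation, $\sum_{k=1}^{n} f(k)$ simply means "evaluate $f(k)$ at $k = 1, 2, \ldots, n$ and add the results"; for example, $\sum_{k=1}^{4} k^2 = 1+4+9+16 = 30$.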
2017-02-28 15:13:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6766087412834167, "perplexity": 1008.5578513457846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00210-ip-10-171-10-108.ec2.internal.warc.gz"}
https://cyclostationary.blog/category/radio-frequency-scene-analysis/
## PSK/QAM Cochannel Data Set for Modulation Recognition Researchers [CSPB.ML.2023]

The next step in dataset complexity at the CSP Blog: cochannel signals. I’ve developed another data set for use in assessing modulation-recognition algorithms (machine-learning-based or otherwise) that is more complex than the original sets I posted for the ML Challenge (CSPB.ML.2018 and CSPB.ML.2022). Half of the new dataset consists of one signal in noise and the other half consists of two signals in noise. In most cases the two signals overlap spectrally, which is a signal condition called cochannel interference. We’ll call it CSPB.ML.2023.

## Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well

Neural networks with CSP-feature inputs DO generalize in the modulation-recognition problem setting. In some recently published papers (My Papers [50,51]), my ODU colleagues and I showed that convolutional neural networks and capsule networks do not generalize well when their inputs are complex-valued data samples, commonly referred to as simply IQ samples, or as raw IQ samples by machine learners. (Unclear why the adjective ‘raw’ is often used, as it adds nothing to the meaning. If I just say Hey, pass me those IQ samples, would ya?, do you think maybe he means the processed ones? How about raw-I-mean–seriously-man–I-did-not-touch-those-numbers-OK? IQ samples? All-natural vegan unprocessed no-GMO organic IQ samples? Uncooked IQ samples?) Moreover, the capsule networks typically outperform the convolutional networks. In a new paper (MILCOM 2022: My Papers [52]; arxiv.org version), my colleagues and I continue this line of research by including cyclic cumulants as the inputs to convolutional and capsule networks. We find that capsule networks outperform convolutional networks and that convolutional networks trained on cyclic cumulants outperform convolutional networks trained on IQ samples. We also find that both convolutional and capsule networks trained on cyclic cumulants generalize perfectly well between datasets that have different (disjoint) probability density functions governing their carrier frequency offset parameters. That is, convolutional networks do better recognition with cyclic cumulants and generalize very well with cyclic cumulants. So why don’t neural networks ever ‘learn’ cyclic cumulants with IQ data at the input? The majority of the software and analysis work is performed by the first author, John Snoap, with an assist on capsule networks by James Latshaw. I created the datasets we used (available here on the CSP Blog [see below]) and helped with the blind parameter estimation. Professor Popescu guided us all and contributed substantially to the writing.

## What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.
During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.” Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes to a machine, no to nods to me. Let’s take a look.

## One Last Time …

We take a quick look at a fourth DeepSig dataset called 2016.04C.multisnr.tar.bz2 in the context of the data-shift problem in machine learning.

And if we get this right, We’re gonna teach ’em how to say Goodbye … You and I. (Lin-Manuel Miranda, “One Last Time,” Hamilton)

I didn’t expect to have to do this, but I am going to analyze yet another DeepSig dataset. One last time. This one is called 2016.04C.multisnr.tar.bz2, and is described on the DeepSig website. I’ve analyzed the 2018 dataset here, the RML2016.10b.tar.bz2 dataset here, and the RML2016.10a.tar.bz2 dataset here. Now I’ve come across a manuscript-in-review in which both the RML2016.10a and RML2016.04c data sets are used. The idea is that these two datasets are sufficiently distinct to be good candidates for use in a data-shift study involving trained neural-network modulation-recognition systems. The data-shift problem is, as one researcher puts it: “Data shift or data drift, concept shift, changing environments, data fractures are all similar terms that describe the same phenomenon: the different distribution of data between train and test sets” (Georgios Sarantitis). But … are they really all that different?

## J. Antoni’s Fast Spectral Correlation Estimator

The Fast Spectral Correlation estimator is a quick way to find small cycle frequencies. However, its restrictions render it inferior to estimators like the SSCA and FAM. In this post we take a look at an alternative CSP estimator created by J. Antoni et al (The Literature [R152]). The paper describing the estimator can be found here, and you can get some corresponding MATLAB code, posted by the authors, here if you have a Mathworks account.

## Cyclostationarity of DMR Signals

Let’s take a brief look at the cyclostationarity of a captured DMR signal. It’s more complicated than one might think. In this post I look at the cyclostationarity of a digital mobile radio (DMR) signal empirically. That is, I have a captured DMR signal from sigidwiki.com, and I apply blind CSP to it to determine its cycle frequencies and spectral correlation function. The signal is arranged in frames or slots, with gaps between successive slots, so there is the chance that we’ll see cyclostationarity due to the on-burst (or on-frame) signaling and cyclostationarity due to the framing itself.

## Comments on “Deep Neural Network Feature Designs for RF Data-Driven Wireless Device Classification,” by B. Hamdaoui et al

Another post-publication review of a paper that is weak on the ‘RF’ in RF machine learning.
Let’s take a look at a recently published paper (The Literature [R148]) on machine-learning-based modulation-recognition to get a data point on how some electrical engineers–these are more on the side of computer science I believe–use mathematics when they turn to radio-frequency problems. You can guess it isn’t pretty, and that I’m not here to exalt their acumen.

## More on DeepSig’s RML Data Sets

The second DeepSig data set I analyze: SNR problems and strange PSDs. I presented an analysis of one of DeepSig’s earlier modulation-recognition data sets (RML2016.10a.tar.bz2) in the post on All BPSK Signals. There we saw several flaws in the data set as well as curiosities. Most notably, the signals in the data set labeled as analog amplitude-modulated single sideband (AM-SSB) were absent: these signals were only noise. DeepSig has several other data sets on offer at the time of this writing. In this post, I’ll present a few thoughts and results for the “Larger Version” of RML2016.10a.tar.bz2, which is called RML2016.10b.tar.bz2. This is a good post to offer because it is coherent with the first RML post, but also because more papers are being published that use the RML 10b data set, and of course more such papers are in review. Maybe the offered analysis here will help reviewers to better understand and critique the machine-learning papers. The latter do not ever contain any side analysis or validation of the RML data sets (let me know if you find one that does in the Comments below), so we can’t rely on the machine learners to assess their inputs. (Update: I analyze a third DeepSig data set here. And a fourth and final one here.)

## All BPSK Signals

An analysis of DeepSig’s 2016.10A data set, used in many published machine-learning papers, and detailed comments on quite a few of those papers.

Update March 2021: See my analyses of three other DeepSig datasets here, here, and here.

Update June 2020: I’ll be adding new papers to this post as I find them. At the end of the original post there is a sequence of date-labeled updates that briefly describe the relevant aspects of the newly found papers. Some machine-learning modulation-recognition papers deserve their own post, so check back at the CSP Blog from time-to-time for “Comments On …” posts.

## A Gallery of Cyclic Correlations

There are some situations in which the spectral correlation function is not the preferred measure of (second-order) cyclostationarity. In these situations, the cyclic autocorrelation (non-conjugate and conjugate versions) may be much simpler to estimate and work with in terms of detector, classifier, and estimator structures. So in this post, I’m going to provide surface plots of the cyclic autocorrelation for each of the signals in the spectral correlation gallery post. The exceptions are those signals I called feature-rich in the spectral correlation gallery post, such as DSSS, LTE, and radar. Recall that such signals possess a large number of cycle frequencies, and plotting their three-dimensional spectral correlation surface is not helpful as it is difficult to interpret with the human eye. So for the cycle-frequency patterns of feature-rich signals, we’ll rely on the stem-style (cyclic-domain profile) plots that I used in the spectral correlation gallery post.
## Data Set for the Machine-Learning Challenge [CSPB.ML.2018]

A PSK/QAM/SQPSK data set with randomized symbol rate, inband SNR, carrier-frequency offset, and pulse roll-off.

Update February 2023: I’ve posted a third challenge dataset here. It is CSPB.ML.2023 and features cochannel signals.

Update April 2022. I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution. If you refer to either of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish: Original ML Challenge Dataset: CSPB.ML.2018. Shifted ML Challenge Dataset: CSPB.ML.2022.

Update September 2020. I made a mistake when I created the signal-parameter “truth” files signal_record.txt and signal_record_first_20000.txt. Like the DeepSig RML data sets that I analyzed on the CSP Blog here and here, the SNR parameter in the truth files did not match the actual SNR of the signals in the data files. I’ve updated the truth files and the links below. You can still use the original files for all other signal parameters, but the SNR parameter was in error.

Update July 2020. I originally posted $20,000$ signals in the posted data set. I’ve now added another $92,000$ for a total of $112,000$ signals. The original signals are contained in Batches 1-5, the additional signals in Batches 6-28. I’ve placed these additional Batches at the end of the post to preserve the original post’s content.

## A Challenge for the Machine Learners

The machine-learning modulation-recognition community consistently claims vastly superior performance to anything that has come before. Let’s test that.

Update February 2023: A third dataset has been posted here. This new dataset, CSPB.ML.2023, features cochannel signals.

Update April 2022: I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution. If you refer to any of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish: Original ML Challenge Dataset: CSPB.ML.2018. Shifted ML Challenge Dataset: CSPB.ML.2022. Cochannel ML Dataset: CSPB.ML.2023.

### Update February 2019

I’ve decided to post the data set I discuss here to the CSP Blog for all interested parties to use. See the new post on the Data Set. If you do use it, please let me and the CSP Blog readers know how you fared with your experiments in the Comments section of either post. Thanks!

## CSP Estimators: The FFT Accumulation Method

An alternative to the strip spectral correlation analyzer. Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by using the paper by Roberts et al (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper.
The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations to their proper $\displaystyle (f, \alpha)$ values.

## ‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.

Reconsidering my first attempt at teaching a machine the Fourier transform with the help of a CSP Blog reader. Also, the Fourier transform is viewed by Machine Learners as an input data representation, and that representation matters. I first considered whether a machine (neural network) could learn the (64-point, complex-valued) Fourier transform in this post. I used MATLAB’s Neural Network Toolbox and I failed to get good learning results because I did not properly set the machine’s hyperparameters. A kind reader named Vito Dantona provided a comment to that original post that contained good hyperparameter selections, and I’m going to report the new results here in this post. Since the Fourier transform is linear, the machine should be set up to do linear processing. It can’t just figure that out for itself. Once I used Vito’s suggested hyperparameters to force the machine to be linear, the results became much better. (The comparison figure that followed here is not reproduced.)

## CSP Patent: Tunneling

Tunneling == Purposeful severe undersampling of wideband communication signals. If some of the cyclostationarity property remains, we can exploit it at a lower cost. My colleague Dr. Apurva Mody (of BAE Systems, AiRANACULUS, IEEE 802.22, and the WhiteSpace Alliance) and I have received a patent on a CSP-related invention we call tunneling. The US Patent is 9,755,869 and you can read it here or download it here. We’ve got a journal paper in review and a 2013 MILCOM conference paper (My Papers [38]) that discuss and illustrate the involved ideas. I’m also working on a CSP Blog post on the topic.

Update December 28, 2017: Our Tunneling journal paper has been accepted for publication in the journal IEEE Transactions on Cognitive Communications and Networking. You can download the pre-publication version here.

## CSP Estimators: Cyclic Temporal Moments and Cumulants

How do we efficiently estimate higher-order cyclic cumulants? The basic answer is first estimate cyclic moments, then combine using the moments-to-cumulants formula. In this post we discuss ways of estimating $n$-th order cyclic temporal moment and cumulant functions. Recall that for $n=2$, cyclic moments and cyclic cumulants are usually identical. They differ when the signal contains one or more finite-strength additive sine-wave components. In the common case when such components are absent (as in our recurring numerical example involving rectangular-pulse BPSK), they are equal and they are also equal to the conventional cyclic autocorrelation function provided the delay vector is chosen appropriately. That is, the two-dimensional delay vector $\boldsymbol{\tau} = [\tau_1\ \ \tau_2]$ is set equal to $[\tau/2\ \ -\tau/2]$. The more interesting case is when the order $n$ is greater than two. Most communication signal models possess odd-order moments and cumulants that are identically zero, so the first non-trivial order $n$ greater than two is four. Our estimation task is to estimate $n$-th order temporal moment and cumulant functions for $n \ge 4$ using a sampled-data record of length $T$.

## Automatic Spectral Segmentation

Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.
In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP. It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal. The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.

## Cyclostationarity of Direct-Sequence Spread-Spectrum Signals

Spread-spectrum signals are used to enable shared-bandwidth communication systems (CDMA), precision position estimation (GPS), and secure wireless data transmission. In this post we look at direct-sequence spread-spectrum (DSSS) signals, which can be usefully modeled as a kind of PSK signal. DSSS signals are used in a variety of real-world situations, including the familiar CDMA and WCDMA signals, covert signaling, and GPS. My colleague Antonio Napolitano has done some work on a large class of DSSS signals (The Literature [R11, R17, R95]), resulting in formulas for their spectral correlation functions, and I’ve made some remarks about their cyclostationary properties myself here and there (My Papers [16]). A good thing, from the point of view of modulation recognition, about DSSS signals is that they are easily distinguished from other PSK and QAM signals by their spectral correlation functions. Whereas most PSK/QAM signals have only a single non-conjugate cycle frequency, and no conjugate cycle frequencies, DSSS signals have many non-conjugate cycle frequencies and in some cases also have many conjugate cycle frequencies.
2023-02-07 11:38:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45300474762916565, "perplexity": 2341.131094820501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00817.warc.gz"}
http://mathhelpforum.com/algebra/86926-help-sums-integers-please.html
1. ## Help with sums of integers please.

Question is:
i) Find the sum of the integers from 29 to 107 inclusive.
ii) Hence find the value of $\sum_{i=29}^{107} (4 + 3i)$.

Sorry, I did not know how to typeset the sigma notation above. I can follow what to do until it gets to the part in brackets, $(4 + 3i)$; I am not sure where that value comes from, since it doesn't seem to fit with the rest of the equation when I calculate it.

2. $\displaystyle \sum\limits_{k = 1}^{107} {\left( {4 + 3k} \right)} = \sum\limits_{k = 1}^{107} {\left( 4 \right)} + 3\sum\limits_{k = 1}^{107} {\left( k \right)} = (107)(4) + 3\frac{{\left( {107} \right)\left( {108} \right)}}{2}$

Here is the general idea: $\displaystyle \sum\limits_{k = 29}^{107} {\left( k \right)} = \sum\limits_{k = 1}^{107} {\left( k \right)} - \sum\limits_{k = 1}^{28} {\left( k \right)}$
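A possible completion of the two parts (added working, not from the original thread), combining the hints above: for part i), there are $107 - 29 + 1 = 79$ integers, so $\sum_{i=29}^{107} i = \frac{79(29+107)}{2} = 79 \cdot 68 = 5372$. Hence for part ii), $\sum_{i=29}^{107} (4+3i) = 4 \cdot 79 + 3 \cdot 5372 = 316 + 16116 = 16432$. The $(4+3i)$ in brackets is simply the formula for each term being summed; splitting it into the constant part and the multiple of $i$ is what lets part i) be reused.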
2018-06-24 17:21:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540731430053711, "perplexity": 278.6174755353776}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866984.71/warc/CC-MAIN-20180624160817-20180624180817-00314.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=60148&p=228976
## G(not) and G

$\Delta G^{\circ} = -nFE_{cell}^{\circ}$

Johnathan Smith 1D
### G(not) and G
What is the difference between G(not) and G?

Tracy Tolentino_2E
### Re: G(not) and G
G(not) is the standard Gibbs free energy. So it's the energy in standard conditions (1 M, 1 atm, 25 degrees Celsius). G is the Gibbs free energy in other conditions.

ASetlur_1G
### Re: G(not) and G
Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?

KarineKim2L
### Re: G(not) and G
In addition to all of the above, the relationship between G(not) and G can be seen in the equation G not = G + RTlnQ.

Rafsan Rana 1A
### Re: G(not) and G
Isn't the equation G = Gnot + RTlnQ?

Jainam Shah 4I
### Re: G(not) and G
G(not) is at standard conditions, whereas G itself doesn't have to be at standard conditions.

Sean Tran 2K
### Re: G(not) and G
G(not) represents standard Gibbs free energy.

Leyna Dang 2H
### Re: G(not) and G
G(not) is the standard Gibbs free energy, thus it is under standard conditions, unlike G.

Sanjana K - 2F
### Re: G(not) and G
Rafsan Rana 1A wrote: Isn't the equation G = Gnot + RTlnQ?
Yes, it should be delta G = delta G(naught) + RTlnQ.

Maya Beal Dis 1D
### Re: G(not) and G
In problem 5G.13 you calculate the delta G of the reaction at equilibrium and then use whether that value is positive or negative to see which way the reaction will proceed (towards reactants or products), but if the reaction is at equilibrium doesn't that mean the reaction is going both ways at the exact same rate and would therefore favor neither direction?

KaleenaJezycki_1I
### Re: G(not) and G
ASetlur_1G wrote: Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?
Yes, for the most part.

BCaballero_4F
### Re: G(not) and G
ASetlur_1G wrote: Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?
Yes, this is essentially correct to say.

205405339
### Re: G(not) and G
G(not) is under standard conditions whereas G is not, and G(not) will contribute to the value of G.

Nathan Rothschild_2D
### Re: G(not) and G
Naught always means 1 M of solution or 1 atm at 298 K (same as 25 Celsius).

Zoe Gleason 4F
### Re: G(not) and G
Gnaught will be under standard conditions, which are 1.0 M, 1 atm, and 25 C.
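One added note on the equilibrium question raised mid-thread (standard thermodynamics, not a quote from the course materials): at equilibrium $\Delta G = 0$ and $Q = K$, so the relation $\Delta G = \Delta G^{\circ} + RT \ln Q$ reduces to $\Delta G^{\circ} = -RT \ln K$. In other words, $\Delta G^{\circ}$ is a fixed property of the reaction under standard conditions, while $\Delta G$ varies with the composition of the mixture through $Q$ and falls to zero at equilibrium, which is why neither direction is favored there.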
2020-07-04 06:55:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.623700737953186, "perplexity": 11561.946520820176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00545.warc.gz"}
https://math.stackexchange.com/questions/2681358/spectral-families-of-commuting-operators
# Spectral families of commuting operators

Consider two self-adjoint bounded operators $A$ and $B$ on a separable Hilbert space. According to the spectral theorem we can write $$A=\int_{-\infty}^{\infty} x d E^{A}_x, \quad B=\int_{-\infty}^{\infty} y d E^{B}_y$$ where $E^{A}_x$ and $E^{B}_y$ are the spectral families of projectors of $A$ and $B$ respectively. Is there a simple way to prove that if $[A,B]=AB-BA=0$, then $[E^{A}_x,E^{B}_y]=0$ for all $x,y$?

From $AB=BA$, you get $A^nB=BA^n$ for all $n$, and immediately $p(A)B=Bp(A)$ for any polynomial $p$. By Stone-Weierstrass, $f(A)B=Bf(A)$ for any $f\in C(\sigma(A))$. Now let $$\Sigma=\{\Delta:\ \Delta\ \text{ is Borel and } E^A(\Delta)B=BE^A(\Delta)\}.$$ From the fact that $E^A$ is a spectral measure, it is quickly deduced that $\Sigma$ is a $\sigma$-algebra. If $V\subset\sigma(A)$ is any open set, it may be written as a disjoint union of intervals, which allows us to see that there exists a sequence $\{f_n\}\subset C(\sigma(A))$ such that $f_n\nearrow 1_V$ pointwise. Then, for any $x\in H$, \begin{align} \langle BE^A(V)x,x\rangle &=\langle E^A(V)x,B^*x\rangle =\int_{\sigma(A)}1_V\,d E^A_{x,B^*x}\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{x,B^*x} =\lim_n\langle f_n(A)x,B^*x\rangle\\ \ \\ &=\lim_n\langle Bf_n(A)x,x\rangle=\lim_n\langle f_n(A)Bx,x\rangle\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{Bx,x} =\int_{\sigma(A)}1_V\,d E^A_{Bx,x}\\ \ \\ &=\langle E^A(V)Bx,x\rangle. \end{align} As $x$ was arbitrary, $E^A(V)B=BE^A(V)$. So $V\in\Sigma$, and thus $\Sigma$ contains all open subsets of $\sigma(A)$, and then the whole Borel $\sigma$-algebra of $\sigma(A)$. Thus $E^A(\Delta)B=BE^A(\Delta)$ for any Borel $\Delta\subset\sigma(A)$. So far we haven't even used that $B$ is self-adjoint; but now we can use that fact to repeat the above argument for a fixed $\Delta_1\subset\sigma(A)$, to obtain $$E^A(\Delta_1)E^B(\Delta_2)=E^B(\Delta_2)E^A(\Delta_1)$$ for any pair of Borel sets $\Delta_1\subset\sigma(A)$, $\Delta_2\subset\sigma(B)$.

What you need is a way to construct $E^A$ and $E^B$ directly from $A,B$. This is accomplished through Stone's Formula: $$\frac{1}{2}\left(E(a,b)x+E[a,b]x\right) \\ = \lim_{\epsilon\downarrow 0}\frac{1}{2\pi i} \int_{a}^{b}(A-(r+i\epsilon)I)^{-1}x-(A-(r-i\epsilon)I)^{-1}x \, dr$$ This is a contour integral around $[a,b]$ with the vertical pieces missing. Using strong limits you can isolate $E(a,b)x$ and $E[a,b]x$ through limits in $a$, $b$. You can actually do this in a constructive way using the $\tan^{-1}$ function to explicitly integrate and take the limit in the strong topology. If $AB=BA$, then $$(A-\lambda I)B = B(A-\lambda I) \\ (A-\lambda I)(B-\mu I)=(B-\mu I)(A-\lambda I) \\ (B-\mu I)(A-\lambda I)^{-1} = (A-\lambda I)^{-1}(B-\mu I) \\ (A-\lambda I)^{-1}(B-\mu I)^{-1}=(B-\mu I)^{-1}(A-\lambda I)^{-1}.$$ From this and Stone's formula, you have a constructive proof that the spectral measures of $A$ and $B$ commute. I think it helps to see how the spectral measure is constructively determined on intervals of $\mathbb{R}$.
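A standard corollary, added here for completeness (not part of the original exchange): once $E^A(\Delta_1)E^B(\Delta_2)=E^B(\Delta_2)E^A(\Delta_1)$ holds for all Borel sets, every pair of bounded Borel functions of $A$ and $B$ commutes, i.e. $f(A)g(B)=g(B)f(A)$, since $f(A)$ and $g(B)$ are strong limits of linear combinations of the commuting spectral projections.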
2019-07-22 09:58:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996578693389893, "perplexity": 89.92662673309567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527907.70/warc/CC-MAIN-20190722092824-20190722114824-00196.warc.gz"}
http://math.boisestate.edu/m502/
# HW19 Exercises (not today) Supplemental problems 1. $\star$ Show that $2$ is not definable in $(\QQ,+)$. 2. $\star\star$ Kunen, exercise II.15.5. 3. $\star\star\star$ Show that + is not definable in $(\NN,s)$, where $s(n)=n+1$ denotes the successor function. 4. $\star\star\star$ Is the ordering $<$ definable in $(\QQ,+,\times)$? # HW18 Exercises (due Wednesday, April 23) 1. Show that the class of all finite graphs is not first-order axiomatizable (that is, there is no theory $\Sigma$ such that the models of $\Sigma$ are exactly the finite graphs). 2. Show that the class of all infinite graphs is not finitely axiomatizable (that is, there is no finite theory $\Sigma$ such that the models of $\Sigma$ are exactly the infinite graphs). Supplemental problems 1. $\star$ Show that $\RR$ and $\RR\smallsetminus\{0\}$ are not isomorphic as linear orders. 2. $\star\star$ Show that the class of connected graphs is not first-order axiomatizable. 3. $\star\star$ Kunen, exercise II.3.8. 4. $\star\star$ Kunen, exercise II.13.11. 5. $\star\star\star$ Kunen, exercise II.13.12. # HW17 Exercises (due Monday, April 21) 1. Show that the relation defined by $\sigma\sim\tau$ if and only if $\Sigma\vdash\sigma=\tau$ is an equivalence relation. 2. Suppose $\Sigma$ is a complete theory with a finite model. Show that $\Sigma$ does not have any infinite models. Supplemental problems 1. $\star$ Suppose $\Sigma$ is a complete theory with an infinite model. Can $\Sigma$ have any finite models? 2. $\star\star$ Kunen, exercise II.12.23. 3. $\star\star$ Let TA (true arithmetic) be the theory of the structure $(\NN,+,\cdot,0,1)$. Show that TA has a model $N$ containing $\NN$ and containing elements “larger” than $\NN$. 4. $\star\star\star$ Show that every model of TA (from the previous problem) has a copy of $\NN$ as an initial segment, and that this copy has no supremum. # HW16 Exercises (due Monday, April 14) 1. Suppose that $P$ is a unary predicate and $Q$ is a propositional variable. Give a formal proof of the following: $(\forall x(P(x)\to Q))\to((\forall xP(x))\to Q)$. 2. Suppose that $R$ and $S$ are unary predicates. Use UG, EI and any other results you like to show there exists a formal proof of the following: $\forall x(R(x)\to S(x))\to(\exists x R(x)\to \exists x S(x))$. 3. Kunen, exercise II.11.16. Suppose $R$ is a binary predicate and use the soundness theorem to show that there does not exist a formal proof of $\forall y\exists x R(x,y)\to\exists x\forall y R(x,y)$. Supplemental problems 1. $\star$ Show that logical axiom 3 is valid. 2. $\star$ Give an example of a structure $A$ and a formula $\phi(x)$ such that $A\models\exists x\phi(x)$ but there is no term $\tau$ such that $A\models\phi(\tau)$. 3. $\star\star$ Kunen, exercise II.10.6. 4. $\star\star$ Kunen, exercise II.11.15. Give a formal proof from ZF that $\exists y\forall x(x\notin y)$. 5. $\star\star\star$ Kunen, exercise II.11.11. # HW15 Exercises (due Wednesday, April 9) 1. Show that the fourth structure on page 12 satisfies the formula $\forall x \exists y (yEx \wedge (\exists z) (z\neq x \wedge yEz))$ directly from the definition of $\models$. 2. Find a formula $\phi$ with one free variable $x$ such that $(\RR,+,\cdot)\models\phi[\sigma]$ iff $\sigma(x)=2$. Supplemental problems 1. $\star$ Show that if $\Sigma$ has an infinite model then $\Sigma$ has an uncountable model. 2. $\star\star$ Kunen, exercise II.7.19. 3. $\star\star$ Complete Exercise 2 with the number $2$ replaced by an arbitrary rational number $a/b$. 4.
$\star\star\star$ What are the limits of the previous problem? 5. $\star\star\star$ Kunen, exercise II.7.20. 6. $\star\star\star$ Kunen, exercise II.7.21. # HW14 Exercises (due Monday, March 31) 1. Give a proof of Lemma II.5.4. 2. Let $L=\{E\}$ where $E$ is a binary relational symbol. Write a set of $L$-sentences $\Sigma$ such that the models of $\Sigma$ are precisely the equivalence relations with exactly $3$ equivalence classes. Supplemental problems 1. $\star$ Prove that the connectives $\vee$, $\wedge$, and $\leftrightarrow$ can all be defined using only $\neg$ and $\rightarrow$. 2. $\star\star$ Prove that $(\QQ,<)$ is isomorphic to $(\QQ\smallsetminus\{0\},<)$, but that $(\RR,<)$ is not isomorphic to $(\RR\smallsetminus\{0\},<)$. 3. $\star$ Find a formula $\phi$ (in the trivial language) such that every model of $\phi$ has size exactly $5$. 4. $\star\star$ Find a language $L$ and a set of $L$-sentences $\Sigma$ such that for all $n\in\NN$, there is a model of $\Sigma$ of size $n$ if and only if $n$ is even. 5. $\star\star\star$ Prove that any partial order $R$ on a finite set can be extended to a linear order $R'\supseteq R$ on that set. # HW13 Exercises (due Wednesday, March 19) 1. Convert the expressions from Polish to standard logical notation. • $\forall a\forall b\rightarrow=n\times ab\vee =na=nb$ • $\forall a\rightarrow\in aS\leq ab$ 2. For each of the following informal mathematical statements, define a Polish lexicon that would allow you to express the statement, and then do so. • The polynomial $x^4+3x+5$ has a root. • The number $n$ is the sum of four squares. Supplemental problems 1. $\star\star$ Kunen, exercise II.4.7. 2. $\star\star\star$ Write a computer program that takes a Polish lexicon and a string of symbols as input, and determines whether the given string is a well-formed expression. # HW12 Exercises (not today) Supplemental problems 1. $\star$ Kunen, exercise I.15.14. 2. $\star$ Kunen, exercise I.15.15. 3. $\star\star$ (via Andres) Let $S$ be the set of middle-third-intervals removed during the construction of the Cantor set. The elements of $S$ are strictly totally ordered from left to right. Show that $S$ with this ordering is isomorphic to $\QQ$ with its usual ordering. 4. $\star\star\star$ (via Andres) A train carries $\omega$ many passengers. It then passes $\omega_1$ many stations numbered $0,1,\ldots,\omega_1$. At each station one passenger gets off and then $\omega$ many passengers get on. How many passengers remain when the train pulls into the last station? 5. $\star\star\star$ (via Andres) Show in ZF that if there is an injective function $\omega\to P(X)$ then there is an injective function $P(\omega)\to P(X)$. # HW11 Exercises (not today) Supplemental problems 1. $\star$ Define $+$, $\times$, and $<$ on the rational numbers constructed in class. 2. $\star\star$ Define $+$, $\times$, and $<$ on the real numbers constructed in class. 3. $\star$ If $G_n$ are dense open subsets of $\RR$ then $\bigcap G_n$ is nonempty. [Hint: look up the Baire category theorem.] 4. $\star\star$ If $G_n$ are dense open subsets of $\RR$ then $\bigcap G_n$ is uncountable and dense. 5. $\star\star$ Kunen, exercise I.15.10 (forget the last sentence). 6. $\star\star$ Kunen, exercise I.15.11 (forget the last sentence). 7. $\star\star$ Kunen, exercise I.15.12 (forget the last sentence). 8. $\star\star\star$ The last sentence of Kunen, exercises I.15.10–11. # HW10 Exercises (due Monday, March 10) 1. Kunen, exercise I.14.9. If you know the rank of $x$, then what is the rank of $\bigcup x$?
(As always, see the text for a more precise problem statement.) 2. For any $\alpha$ the Union axiom holds in $V_\alpha$. 3. If $\alpha$ is a limit then the Pairing axiom holds in $V_\alpha$. Supplemental problems 1. $\star$ If $\alpha$ is a limit then the Power Set axiom holds in $V_\alpha$. 2. $\star$ If $\alpha$ is a limit and $x,y\in V_\alpha$ and $f$ is a function from $x$ to $y$ then $f\in V_\alpha$. 3. $\star\star$ If $\kappa$ is inaccessible then $|V_\kappa|=\kappa$. 4. $\star\star$ Kunen, exercise I.14.14. 5. $\star\star$ Kunen, exercise I.14.17. 6. $\star\star\star$ Kunen, exercise I.14.19. 7. $\star\star\star$ The rest of Kunen, exercise I.14.21. If $\gamma>\omega$ is a limit then $V_\gamma$ is a model of ZC. On the other hand, Replacement does not hold in $V_{\omega+\omega}$. 8. You may also attempt any of the other exercises in this section.
2018-03-20 05:52:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658329486846924, "perplexity": 473.8814844066774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00630.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-13-counting-and-probability-13-2-permutations-and-combinations-13-2-asses-your-understanding-page-855/23
## Precalculus (10th Edition)

The ordered arrangements:
$abc,abd,abe,acb,acd,ace,adb,adc,ade,aeb,aec,aed$
$bac,bad,bae,bca,bcd,bce,bda,bdc,bde,bea,bec,bed$
$cab,cad,cae,cba,cbd,cbe,cda,cdb,cde,cea,ceb,ced$
$dab,dac,dae,dba,dbc,dbe,dca,dcb,dce,dea,deb,dec$
$eab,eac,ead,eba,ebc,ebd,eca,ecb,ecd,eda,edb,edc$

$P(5,3)=60$

We know that $P(n,r)=n(n-1)(n-2)\cdots(n-r+1)$. Also $P(n,0)=1$ by convention. Hence, $P(5,3)=5\cdot4\cdot3=60$.
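A quick computational check (an added sketch, not part of the textbook solution; it assumes Python 3.8+ for math.perm):

import math
from itertools import permutations

# Count the ordered arrangements of 3 letters drawn from 5 by brute force,
# then compare with the closed-form P(5,3) = 5*4*3.
count = sum(1 for _ in permutations("abcde", 3))
print(count)            # 60
print(math.perm(5, 3))  # 60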
2021-10-16 05:37:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5607620477676392, "perplexity": 729.9490749813327}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00182.warc.gz"}
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5023890
### Ashaman73 Posted 21 January 2013 - 08:09 AM

In general, ATI and NVIDIA will handle GLSL code whose behavior is not clearly defined differently. Nvidia is known for more lax handling of GLSL syntax, whereas ATI often requires strict syntax. It is best to compile and run your GLSL code as often as possible on both platforms to detect errors early enough. Just guessing, but I believe that the pow implementation is more picky on Nvidia (per the specification, the behaviour of pow(x,y) is undefined if x < 0, or if x = 0 and y <= 0). Therefore I would put the pow function inside the if-clause:

if (SpecularFactor > 0) {
    // pow is only evaluated for a strictly positive base, staying inside
    // the domain where the GLSL specification defines its result.
    SpecularFactor = pow(SpecularFactor, specularPower);
    SpecularColor = _light.colour.rgb * specularIntensity * SpecularFactor;
}
2013-12-11 09:21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275121569633484, "perplexity": 12656.048964260703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164033950/warc/CC-MAIN-20131204133353-00089-ip-10-33-133-15.ec2.internal.warc.gz"}
https://mindmatters.ai/2022/09/page/4/
Mind Matters Natural and Artificial Intelligence News and Analysis

# Monthly Archive September 2022

## Amazon’s Rings of Power: Some Warning Signs But Still Hope

The screenwriters had to create dialogue from Tolkien’s notes about the world in which Lord of the Rings is set.

## Taiwan Has Bet Its Uncertain Future on Advanced Microchips

An increasingly belligerent China has long claimed to own Taiwan, which manufactures 90% of the world’s *advanced* microchips.

Taiwan is the world’s largest manufacturer of microchips, and not just by a small margin. Taiwan manufactures 65% of the microchips used in everything from smartphones to missiles. This compares to the U.S. at 10% and China at 5%. South Korea and Japan produce the rest. More important, Taiwan manufactures 90% of the world’s advanced microchips. In other words, without Taiwan, the world’s supply of microchips would come to a standstill, something that has been keenly felt since 2021 when chip shortages affected the auto industry. So far, the world’s dependence on Taiwan’s chips has protected the self-governing island nation from a potential invasion or ruinous trade sanctions from China. Earlier, we looked at U.S. House Speaker Nancy Pelosi’s visit …

## Can Religion Without Belief “Make Perfect Sense”?

Philosopher Philip Goff, a prominent voice in panpsychism, also defends the idea of finding meaning in a religion we don’t really believe.

Durham University philosopher Philip Goff, co-editor of Is Consciousness Everywhere? Essays on Panpsychism (November 1, 2022), has an interesting take on religion. While it’s common to assume that religious people are “believers,” he thinks that people can meaningfully be part of a religion without actually believing in it: “But there is more to a religion than a cold set of doctrines. Religions involve spiritual practices, traditions that bind a community together across space and time, and rituals that mark the seasons and the big moments of life: birth, coming of age, marriage, death. This is not to deny that there are specific metaphysical views associated with each religion, nor that there is a place for assessing how plausible those views …”

## The Vector Algebra Wars: A Word in Defense of Clifford Algebra

A well-recognized, deep problem with using complex numbers as vectors is that they only really work with two dimensions.

Vector algebra is the manipulation of directional quantities. Vector algebra is extremely important in physics because so many of the quantities involved are directional. If two cars hit each other at an angle, the resulting direction of the cars is based not only on the speed they were traveling, but also on the specific angle they were moving at. Even if you’ve never formally taken a course in vector algebra, you probably have some experience with the easiest form of vector algebra — complex numbers (i.e., numbers that include the imaginary number i). In a complex number, you no longer have a number line, but, instead, you have a number plane. The image below shows the relationship between the real …

## Don’t Worship Math: Numbers Don’t Equal Insight

The unwarranted assumption that investing in stocks is like rolling dice has led to some erroneous conclusions and extraordinarily conservative advice.

My mentor, James Tobin, considered studying mathematics or law as a Harvard undergraduate but later explained why he chose economics: “I studied economics and made it my career for two reasons.
The subject was and is intellectually fascinating and challenging, particularly to someone with taste and talent for theoretical reasoning and quantitative analysis. At the same time it offered the hope, as it still does, that improved understanding could better the lot of mankind. I was an undergraduate math major (at Harvey Mudd, not Harvard) and chose economics for the much the same reasons. Mathematical theories and empirical data can be used to help us understand and improve the world. For example, during the Great Depression in the 1930s, governments everywhere had so Read More › ## Analysis: Can “Communitarian Atheism” Really Work? Ex-Muslim journalist Zeeshan Aleem, fearing that we are caught between theocracy and social breakdown, sees it as a possible answer Zeeshan Aleem, an American journalist raised as a Muslim — but now an atheist — views his country as caught between “the twin crises of creeping theocracy and the death of conventional religion.” He seeks a new kind of atheism — communitarian atheism — as part of a solution: A rapidly increasing share of Americans are detaching from religious communities that provide purpose and forums for moral contemplation, and not necessarily finding anything in their stead. They’re dropping out of church and survey data suggests they’re disproportionately like to be checked out from civic life. Their trajectory tracks with a broader decades-long trend of secular life defined by plunging social trust, faith in institutions, and participation in civil society. My Read More › ## There Really Is a “Batman” and He Isn’t in the Comics Daniel Kish lost both eyes to cancer as a baby. With nothing to lose, he discovered human echolocation Perhaps one should not really say that Daniel Kish “discovered” human echolocation. Yet, having no other options as a blind infant cancer survivor, he discovered early on — and began to publicize — a sense that few sighted persons would even think of: He calls his method FlashSonar or SonarVision. He elaborated for the BBC: Do people need to be blind to do it? Not necessarily: In 2021, a small study led by researchers at Durham University showed that blind and sighted people alike could learn to effectively use flash sonar in just 10 weeks, amounting to something like 40 to 60 hours of total training. By the end of it, some of them were even better at specific tests Read More › ## News From the Search for Extraterrestrial Life 3 The Webb gets a good closer look at an exoplanet Exoplanets are hard to spot but the James Webb Space Telescope got an image of one (HIP 65426b), reported September 1: The planet is more than 10,000 times fainter than its host star and about 100 times farther from it than Earth is from the Sun (~93 million miles), so it easily could be spotted when the telescope’s coronagraphs removed the starlight. The exoplanet is between six and 12 times the mass of Jupiter—a range that could be narrowed once the data in these images is analyzed. The planet is only 15 million to 20 million years old, making it very young compared to our 4.5-billion-year-old Earth. 
Isaac Schultz, “See Webb Telescope’s First Images of an Exoplanet” at Gizmodo (September Read More › ## Madness: Why Sci-Fi Multiverse Stories Often Feel Boring In a multiverse, every plot development, however implausible, is permitted because we know it won’t affect our return to the expected climax Filmmakers communicate with audiences using common and accepted story devices (tropes) that viewers identify with — maybe the “average person takes the crown” or “love triangle.” Some tropes are overused or used in ways that undermine the story. In discussing what I think went wrong with Dr. Strange in the Multiverse of Madness (2022) and some similar films, I’ll use the word trope to refer to any story element that is used to push the plot. I find four tropes particularly annoying: the Multiverse, Time Travel, the Liar Revealed, and the MacGuffin Chase. Because I’ve just finished reviewing Multiverse of Madness, let’s start with the Multiverse trope. Before reviewing the Dr. Strange sequel, I’d written an essay, “Dr. Strange: Can Read More ›
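The vector-algebra item above describes complex numbers as points in a plane rather than on a line. As a minimal sketch of that idea (my own generic illustration, not code or an image from the truncated article), Python's built-in complex type shows how multiplying by i rotates a point 90 degrees in the number plane while preserving its length:

```python
# Minimal sketch of the "number plane" idea from the vector-algebra item above.
# This is an editorial illustration, not material from the original article.

z = 3 + 4j          # a complex number, i.e. the point (3, 4) in the plane
rotated = z * 1j    # multiplying by i rotates the point 90 degrees counterclockwise

print(z.real, z.imag)              # 3.0 4.0
print(rotated.real, rotated.imag)  # -4.0 3.0 : (3, 4) maps to (-4, 3)

# The vector length (magnitude) is preserved under the rotation:
print(abs(z), abs(rotated))        # 5.0 5.0
```

This two-dimensionality is exactly the limitation the article's deck alludes to; rotations in three or more dimensions need a richer system such as quaternions or Clifford algebra.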
2023-03-24 07:16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.259904682636261, "perplexity": 3511.3275208620826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00125.warc.gz"}
http://mathhelpforum.com/algebra/214361-basic-arithmetic-surds-print.html
# Basic Arithmetic and Surds

• March 6th 2013, 09:31 PM
alicesayde
Basic Arithmetic and Surds
Express $(2\sqrt 5 - 3\sqrt{10})^2$ in the form $a+b\sqrt 2$ and hence find the values of a and b. If you can, please help and explain the method and how to solve equations similar to this. :) Thanks

• March 6th 2013, 10:47 PM
veileen
Re: Basic Arithmetic and Surds
That is not an equation. Anyway: $(a-b)^2=a^2-2ab+b^2$, so:
$(2\sqrt 5-3\sqrt {10})^2=(2\sqrt 5)^2-2\cdot 2\sqrt 5 \cdot 3\sqrt {10} + (3\sqrt {10})^2=$
$=4\cdot 5-12\sqrt{5\cdot 10}+9\cdot 10=20-12\cdot 5 \sqrt 2 +90=110-60\sqrt 2$
a = 110, b = -60

• March 6th 2013, 10:49 PM
MINOANMAN
Re: Basic Arithmetic and Surds
Alice, show us some of your work... apply the well-known identity $(a-b)^2 = a^2+b^2-2ab$, where a and b could be anything; in your case $2\sqrt 5$ and $3\sqrt{10}$. Try it.
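The expansion above can be sanity-checked numerically. The following short script is an editorial addition, not part of the original thread; it simply confirms that $(2\sqrt 5-3\sqrt{10})^2$ equals $110-60\sqrt 2$:

```python
import math

# The original expression, evaluated directly
lhs = (2 * math.sqrt(5) - 3 * math.sqrt(10)) ** 2

# The claimed closed form a + b*sqrt(2) with a = 110, b = -60
rhs = 110 - 60 * math.sqrt(2)

print(lhs)                     # ~25.147
print(rhs)                     # ~25.147
print(math.isclose(lhs, rhs))  # True
```

A useful side check: a square can never be negative, so a candidate value such as 50 - 60*sqrt(2), which is about -34.85, could not be correct.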
2015-02-01 09:58:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43890050053596497, "perplexity": 3395.9644460752434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120453043.42/warc/CC-MAIN-20150124172733-00204-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/39721-triangle-problem-sines.html
# Thread: Triangle Problem with sines

1. ## Triangle Problem with sines

A forest fire is spotted from two fire towers. The triangle determined by the two towers and the fire has angles of 28 degrees and 37 degrees at the tower vertices. If the towers are 3000 meters apart, which one is closer to the fire?

2. Originally Posted by victorfk06
A forest fire is spotted from two fire towers. The triangle determined by the two towers and the fire has angles of 28 degrees and 37 degrees at the tower vertices. If the towers are 3000 meters apart, which one is closer to the fire?

1. Draw a sketch.
2. The angle at the fire is 180° - 28° - 37° = 115°.
3. Use the Sine Rule:
$\frac{x}{3000}=\frac{\sin(37^\circ)}{\sin(115^\circ)}$
$\frac{y}{3000}=\frac{\sin(28^\circ)}{\sin(115^\circ)}$
4. Without any calculations it is obvious that y < x, because sin(28°) < sin(37°). Since each side lies opposite the other tower's angle, y is the distance from the 37° tower to the fire, so the tower with the 37° angle is the closer one.
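To make the comparison concrete, here is a short numeric check of the two Sine Rule expressions. This is an editorial addition, not part of the original thread:

```python
import math

base = 3000.0                   # distance between the two towers, in meters
angle_fire = math.radians(115)  # angle at the fire: 180 - 28 - 37 degrees

# Each side is opposite the stated angle, so it runs from the *other*
# tower to the fire.
x = base * math.sin(math.radians(37)) / math.sin(angle_fire)
y = base * math.sin(math.radians(28)) / math.sin(angle_fire)

print(round(x))  # ~1992 m: distance from the 28-degree tower to the fire
print(round(y))  # ~1554 m: distance from the 37-degree tower to the fire
```

The numbers agree with the argument above: y < x, so the tower whose angle is 37° is roughly 1554 m from the fire and is the closer one.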
2017-10-19 05:25:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6964342594146729, "perplexity": 1319.3309720613343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00402.warc.gz"}