#the lower level of 'could learn from context' was 80%+ comprehension
rigelmejo · 1 year
Text
I don’t have the link at the moment, but on r/languagelearning on reddit someone mentioned a statistic I’ve also heard before: that reading and comprehending the overall main idea of a text (the gist), and therefore being able to learn at least Some new vocabulary/grammar from context, requires a minimum of 80-92% comprehension of the text.
So, one, it makes the suggestion to learn ‘2000 common words’ somewhat useful. 2000 common words covers around ~80% comprehension of text in many languages, meaning if you’ve studied those 2000 words you’re more likely to find texts you comprehend Enough of to follow the main idea. And meaning if you do not comprehend enough, you’ve got a better chance of looking up some key words to get you up to at least that 80% comprehension. And, if you’re going to learn by extensive reading, at least hitting the 80% comprehension mark is going to make learning more doable. (Although true, it will be quite draining and frustrating if you have a low tolerance for ambiguity and you’re trying to read stuff you only comprehend 80% of, it is at least doable). 
Two, that point from 80% comprehension to 95-98% comprehension may feel brutal (in my opinion). Because it is when reading texts for native speakers roughly becomes doable, but it won’t necessarily be easy or clearly understood until you learn a bulk of words that make the difference between those percents of comprehension. Once you get past learning the common words, the vocabulary increases you make do not increase your comprehension % nearly as much. You can learn a few hundred words and see a huge improvement in comprehension (like 50%), you can learn 2000 common words and see a huge improvement in comprehension (around 80%!). Then you get to the next thousand and next thousand and the percent comprehended isn’t going up as quickly anymore. What you comprehend starts to depend on if you learned words in that subject before or not, if you’re brand new to that topic domain or not. 
I am really fascinated by this 80-92% comprehension is when you start grasping the overall main gist of what’s going on. Because when I learned French, and now Chinese, I remember directly experiencing it. This is about when I gave up ALL word lookup in french and relied solely on extensive reading (which worked). This is also about where I started giving up word lookup in Chinese shows and manhua, moving to solely extensive watching/reading. It’s also where I started to be able to slightly push into reading novels (though I would not really try extensive reading until another 1000 words were learned, and I’m maybe at 4000 words known or more? Probably between 2000-2500 hanzi known anyway, and now I’d consider some extensive reading quite enjoyable and some doable but draining). I do think for someone with a high tolerance for ambiguity, and who enjoys just diving in and Doing stuff, 80% comprehension is that minimum you want to reach when moving into studying by Doing reading/watching extensively. (And of course, using materials you’re already familiar with like watched before in your native language is going to somewhat make comprehension higher, helping with learning from context).
bombardthehq · 4 years
Text
Hallucinations as top-down effects on perception Powers et al, 2016, read 27-28.06.20
a review of the state of the literature of top-down effects on perception in neuroscience; and then applies it to hallucinations (we're reading for the first part, but we'll take the second too)
They say 'present-day cognitive scientists' argue cognition does *not* influence perception. They cite: Firestone C, Scholl BJ (2015): Cognition does not affect perception: Evaluating the evidence for 'top-down' effects [in what sense do they use 'perception'? perhaps more like Raftopoulos's restricted sense? also: this shouldn't be seen as an argument against the penetrability of 'perceptual belief' in Lyons 2011]
but 'work in computational neuroscience' challenges this view, & they also think hallucinations pose a challenge to 'strict, encapsulated modularity' [Fodor]; they'll illustrate it with 'phenomenology(!) and neuro-computational work'
MODULES OF THE MIND
Fodor shout out! they give a summary of his modular parsers - "for example, the early vision module takes in ambient light and outputs color representations" - which are cognitively penetrable only in a very strict, delineated way.
Fodor's modules are annoying for scientists because they're poorly defined, so difficult to falsify. Some 'ultra-cognitive neuropsychologists' even claim that the brain 'hardware' is irrelevant to the 'software' they're interested in & resist empirical evidence!
a strict modular approach requires 'functional segregation', with different parts of the brain doing different things inaccessibly to one another, but evidence supports an 'integrationism' of the brain [we saw this with the knots "Vetter & Newen" were tying themselves into]
the authors prefer "predictive coding" [like O'Callaghan et al], which they use to model the integrated mind via "functional and effective connectivity data" [a whole new language of buzzwords to learn!]
PREDICTIVE PERCEPTION IMPLIES COGNITIVE PENETRATION
while perception that corresponds to truth would be adaptive, perception that allows misbelief could also be adaptive if the misbeliefs are adaptive
we might, per Hume & Helmholtz, 'perceive what would need to be there for our sensations to make sense'
so the brain uses both bottom-up information and top-down inferences, as Helmholtz argued [fascinating - who was that guy?]
it uses the top-down inferences to 'compute precision-weighted prediction errors' to arrive at 'an optimal estimation' - they cite a bunch of 'predictive coding' & 'attention' papers
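[a rough sketch of what 'precision-weighted prediction error' usually means, assuming the standard Gaussian toy case where a prior estimate μ_old with precision π_prior meets a sensory input x with precision π_sensory:
μ_new = μ_old + (π_sensory / (π_sensory + π_prior)) × (x − μ_old)
i.e. the prediction error (x − μ_old) only moves the estimate in proportion to how reliable the input is relative to the prior - which is why treating 'attention' as precision (below) can change how much an error counts]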
top-down has a long history in neuroscience, from the 80s
Friston K (2005): A theory of cortical responses -- the origin of 'predictive coding'
contra Fodor, some studies claim that 'early visual processing' ('perception' in Raftopoulos) is influenced by 'non-perceptual information' ... "semantic priming increases speed and accuracy of detection by minimizing prediction error" ... "Word contexts result in ambiguous shapes being perceived as the missing letters that complete a word" ... a bunch of others
THE BURDEN OF PROOF: ESTABLISHING TOP-DOWN INFERENCES IN PERCEPTION
they go over Firestone & Scholl's criticisms of the 'new look' research
they say that they're plagued w/ problems that can be avoided by following these guidelines:
1. Disentangle perceptual from decisional processes 2. Dissociate reaction time effects from primary perceptual changes 3. Avoid demand characteristics 4. Ensure adequate low-level stimulus control 5. Guarantee equal attentional allocation across conditions.
these issues are inherent to tasks where perception guides a behaviour decision (so research would have to be done without that)
but a 'Bayesian formulation' doesn't permit this distinction; 'Signal Detection Theory' appears to, but it also allows cognition to influence perception.
"Top-down processes can even alter the mechanical properties of sensory organs by alteringthe signal-to-noise ratio" [wow]
they will argue that top-down influence is clearest 'when sensory input is completely absent'-- 'when experiences are hallucinated'
HALLUCINATIONS AS EXAMPLES OF TOP-DOWN PENETRATION
Hallucinations can be consistent w/ affective states; guilt & disease when depressed, etc.
hallucinations are fairly common in even 'non-clinical' cases; they occur in 28% of the population -- hallucinations may be 'an extreme of normal functioning', not a 'failure of modularity'
they give some support for hallucinations being top-down: "prior knowledge of a visual scene conferred an advantage in recognizing a degraded version of that image" & patients at risk for psychosis were 'particularly susceptible to this advantage'; similarly, patients who were taught to associate a difficult-to-detect noise w/ a visual stimulus began hearing it when shown the visual w/out the noise -- esp. patients 'who hallucinate'
experiences of uncertainty increase the influence of top-down
they feel that studying penetrability via hallucinatory experiences gets around the problems Firestone & Scholl identify; neuroimaging might do it too
now they'll try to integrate this understanding of hallucinations as top-down w/ 'notions of neural modularity and connectivity'
BRAIN LESIONS, MODULARITY, CONNECTIVITY AND HALLUCINATIONS
They propose that "inter-regional effects" mediate top-down influence on perception
these are often discussed in terms of 'attention'; 'predictive coding' theory conceives of attention as 'the precision of priors' and 'prediction errors'
a bit of statistics jargon for modelling we don't care about, although they make the interesting equivalence between 'change over time' (uncertainty) and 'predictive relationship between states' (reliability) [difference & repetition baby!] -- the gist is that all this stuff is a promising, plausible explanation of some difficult areas of the data but ['precision weighting'] is still waiting on more empirical trials
so someone walking home after watching a scary movie might have 'precise' enough 'priors', ie. a strongly-weighted 'background theory' (in Fodor's terms), to actually see the shadows on the street as being darker than they are... & if they were precise enough, strong enough, they'd really hallucinate
their support: a single case where a lesion caused hallucinations; 'functional connectivity' between the lesion location and other regions; 'effective (directional) connectivity' in patients w/ Audio-Visual Hallucinations
they'll use these to argue that 'top-down priors' influence perception, contra strict encapsulation
1. lesion-induced hallucinosis
with 'graph theory', fMRIs of the brain are parsed into 'hubs (sub-networks)', with a subset of regions connecting those sub-networks ('connectors')
lesions are more likely in 'rich-club hubs', regions that mediate long-range connectivity between connected information processing hubs
the limbic system is a rich-club hub & has been implicated in 'the global specification of' precision weighting
it is not, however, part of *early perception*; they'll instead show "regions like orbitofrontal cortex penetrate perceptual processing in primary sensory cortices giving rise to hallucinations"
~this part gets very heavy on the neuroscience & is beyond me - but the gist is that they're able to look at which hubs do what & how that gets disrupted by lesions. It appears that there are definitely such things as modules like Fodor's parser, responsible for different faculties, & which parts of the brain these are found in is well settled - it's just that these seem to be cognitively penetrable bc of how they behave with lesions. However, these are not 'proof' of it, just 'candidate' explanations for penetration
2. lesion effects on graph theory metrics
re: connectivity, lesions are more disruptive, & can be disruptive of the whole brain, when they occur in between-module connections (rich club hubs); & they alter connectivity in opposing, un-lesioned hemispheres [this is a challenge for 'cognitive neuropsychology' - the sophist-like 'cognitivists' from before]
"We suggest that the rich-club hubs that alter global network function ... are also the hubs involved in specifying global precision and therefore updating of inference in predictive coding" -- & thats how early perception is cognitively penetrated (ie. 'higher' priors re: precision are mediated by the same stuff that mediate 'predictive coding' in early perception) [note this is a 'suggestion', but they do give a study in support]
there *may* be a connection w/ schizophrenia and lesions in these areas, but it hasnt really been shown yet; but some neuropsychiatrists do work off of this
"In our predictive coding approach informational integration (between modules) is mediated via precision weighting of priors and prediction errors, perhaps through rich club hubs" -- but "the exact relationshipbetween psychological 9modularity and modularity in functional connectivity remains an open empirical question."
ie. percetion is cognitively penetrated because 'predictive coding' (used in early perception) is mediated by a 'precision weighting' of 'priors and prediction errors' via rich club hubs
3. directional effects
'Dynamic causal modeling' (DCM) is a way of looking for 'directional' connectivity in fMRI data
one study examining 'inner speech processing' found very little connectivity "from Wernicke’s to Broca’s areas" in schizophrenic patients w/ auditory hallucinations (vs. schizophrenic patients without them) -- suggesting 'precision of processing in Broca's was higher than in Wernicke's'
they say that this data is consistent w/ information from 'higher' regions penetrating lower regions
[Wernicke's area is involved in comprehension of written & spoken language, while Broca's area is involved in the production of language; the idea here is that the patients who experienced auditory hallucinations would also, when processing language, rely more on the higher level functions of Broca's area for precision weighting and much less so on the earlier perception of Wernicke's area]
'predictions' are top-down (ie. 'flow from less to more laminated cortices') while 'prediction errors' are bottom-up (the opposite) [what are 'prediction errors'? maybe like an 'error warning'?]
a lot of neuroscience stuff about the insula, priors, and lots of things I don't understand, which I don't need to note; the conclusion is that they speculate that rich club hubs are "well placed to implement changes in gain control as a function of the precision of predictions and prediction errors." [ie. rich club hubs are the 'court' and 'court of appeals' of the brain, 'hearing' predictions & prediction errors & itself 'sentencing' gain control changes]
another paragraph of studies showing similar things, this time with 'bi-stable perception', percepts that switch dominance 'on their own' (without a change in sensory input) -- this happens more in schizophrenics, but currently hasn't been looked at w/r/t hallucinations specifically
DISCUSSION & FUTURE DIRECTIONS
a summary of the above
their argument is that the data is inconsistent with 'an encapsulated modularity of mind'
w/r/t hallucinations, it looks like the top-down 'gain control mechanisms' ... 'sculpt' perceptions even in the absence of sensation
perception is cognitively penetrated insofar as it minimizes 'overall long-term' prediction error; so knowing how the Müller-Lyer illusion works doesn't act on my perception because 'the illusion is Bayes optimal' - seeing in this way is more *overall long term* precise
there is some contradiction about schizophrenics & their tendency to perceive illusions - sometimes it works less, sometimes more. They say that this cannot be generalized & ought to be treated case by case; there is a *hierarchy* of perceptual systems and 'information processing can be impaired at different levels of the hierarchy'
so illusions might fail at a lower level in the hierarchy while hallucinations are generated at a higher level
they discuss work they did w/ ketamine; it doesn't normally cause hallucinations, but they found that it did in the MRI scanner, which is 'perceptually denuded (dark, still, rhythmically noisy)'
ketamine enhances 'bottom up noise'; they argued then that sensory deprivation induces hallucination via top-down priors. "This is similar to the paradoxical effect of hearing loss and vision loss on hallucinations."
higher level precision increases to compensate for lower level prediction errors
so the increased bottom-up feed of ketamine creates prediction errors when sensory deprived & this produces hallucinations -- the priors top-down predictively organizing the error-filled bottom-up feed
so in general, hallucinations are produced by 'the dynamic interaction between priors and prediction errors'
they hint at some arguments that are strongly consonant with our own experiences of schizophrenia. First, that it is possible to 'conjure up' hallucinations at will. Secondly, that there are two types of hallucination - those 'with insight' (accompanied by a sense of unreality), and those 'without insight' (which feel as real as any other percept). We have always argued both of these things. [We have always argued that schizophrenia involves a kind of top-down *compulsion*, ie. I *have to* conjure this...]
tcifiscal · 5 years
Text
Combined Reporting: A Key Tool to Limit Corporate Tax Avoidance
Next week, the Northam administration will give their annual presentation about the state’s finances to the legislature’s money committees. Preliminary reports indicate that state revenues came in above the official forecast for the fiscal year that ended in June. Although the numbers are higher than projected, Virginia’s revenues have not been keeping pace with overall economic growth. Investments require resources, and the state has many unmet budget needs, such as fully funding K-12 education. One way lawmakers can increase revenue is by reversing the erosion of Virginia’s corporate tax base. Currently, Virginia applies a flat 6% corporate income tax on corporate profits earned in the state, but how those “profits” are measured and taxed is not always straightforward.
Today, many large, multi-state corporations are able to use accounting maneuvers and exploit weaknesses in state tax structures to reduce their state tax bills, which puts a strain on state budgets. Corporations can relatively easily shift profits to states that tax them at lower rates – or not at all. For example, manufacturing corporations can set up subsidiary companies in low-tax states as suppliers of factory inputs. The parent company then pays the subsidiary an artificially high price for those inputs, which then get deducted as a business expense, reducing the parent company’s tax liability.
To counter this and similar practices, 28 states and D.C. have passed “combined reporting” laws. Combined reporting requires corporations to add up all of the income from the parent company and most or all of its subsidiaries. The member corporation(s) doing business in Virginia then would report the appropriate share of the combined profit on its state tax returns. The end result is that many common tax avoidance strategies are negated.
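To put hypothetical numbers on it (purely for illustration): suppose a multi-state parent earns $100 million in total profit but, through inflated payments to an out-of-state subsidiary, reports only $60 million of it in Virginia. Under separate reporting, Virginia's 6% rate applies to that $60 million, or about $3.6 million in tax. Under combined reporting, the parent's and subsidiary's profits are added back together and Virginia taxes its apportioned share of the full $100 million; if essentially all of the group's activity takes place in Virginia, that works out to roughly $6 million, and the paper transaction no longer lowers the bill.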
Enacting this reform also ends a tax advantage that multi-state corporations currently enjoy but that locally-based corporations without out-of-state subsidiaries do not get. In terms of tax fairness, it places businesses on a more level playing field. This change would make sure large corporations pay their fair share in taxes, allowing the state to invest more in critical priorities. And many of these large corporations already adhere to combined reporting requirements when filing taxes in one of the other states that has adopted this policy, so this will not be a new practice for them.
By enacting combined reporting, Virginia could gain between $80 million and $165 million a year in additional revenue.
For context, with $165 million in new revenues the state could:
hire sufficient school counselors to get us to nationally recommended caseloads ($88.2 million)
fund the state’s share of pre-K for almost 13,000 four year-olds ($51.6 million), which equals the number of kids in families with an income below 200% of the federal poverty line (roughly $42,700 for a family of three) that currently do not have access to preschool.
Or the state could choose to make significant improvements to increase health care access by: 
eliminating the “40-quarter rule” for legally residing immigrants, which would remove a barrier to coverage for many in the state ($6.5 million)
improving dental access for adults with low incomes by adding a comprehensive dental benefit for Medicaid ($25 million before expansion) 
investing in health coverage for all children from low-income families, regardless of immigration status. We estimate this coverage to cost roughly $68 million annually if all children enroll based on most recently available data.
Alternatively, the state could: 
cover most of the cost of raising the wages of every state employee and employees of state contractors to $15 an hour, helping make sure that public servants are being paid enough to live with dignity.
In the past, there has been some legislative interest in adopting combined reporting in Virginia. During the 2010 and 2012 General Assembly sessions, bills were introduced that would have required combined reporting for state tax purposes. And during the 2017 session, a proposal (HJ 638) would have directed the state’s Department of Taxation to research combined reporting and develop recommendations for its implementation in Virginia. None of these measures advanced.
As lawmakers take stock of the state’s revenue and budget picture, they also need to take steps to make sure Virginia has a modern revenue system – one that keeps pace with changes in the overall economy. As corporate structures have become more complex, combined reporting is a critical policy tool with which to limit corporate tax avoidance.
– Chris Wodicka, Policy Analyst
Learn more about The Commonwealth Institute at www.thecommonwealthinstitute.org
suzanneshannon · 5 years
Text
Real World Cloud Migrations: Azure Front Door for global HTTP and path based load-balancing
As I've mentioned lately, I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.
I'm breaking a single machine into a series of small sites BUT I want to still maintain ALL my existing URLs (for good or bad) and the most important one is hanselman.com/blog/ that I now want to point to hanselmanblog.azurewebsites.net.
That means that the Azure Front Door will be receiving all the traffic - it's the Front Door! - and then forward it on to the Azure Web App. That means:
hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz
There's a few things to consider when dealing with reverse proxies like this and I've written about that in detail in this article on Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies.
You can and should read in detail about Azure Front Door here.
It's worth considering a few things. Front Door MAY be overkill for what I'm doing because I have a small, modest site. Right now I've got several backends, but they aren't yet globally distributed. If I had a system with lots of regions and lots of App Services all over the world AND a lot of static content, Front Door would be a perfect fit. Right now I have just a few App Services (Backends in this context) and I'm using Front Door primarily to manage the hanselman.com top level domain and manage traffic with URL routing.
On the plus side, that might mean Azure Front Door was exactly what I needed. It was super easy to set up Front Door as there's a visual Front Door Designer. It was less than 20 minutes to get it all routed, and SSL certs took just a few hours more. You can see below that I associated staging.hanselman.com with two Backend Pools. This UI in the Azure Portal is (IMHO) far easier than the Azure Application Gateway. Additionally, Front Door is Global while App Gateway is Regional. If you were a massive global site, you might put Azure Front Door in ahem, front, and Azure App Gateway behind it, regionally.
Again, a little overkill as my Pools are pools of one, but it gives me room to grow. I could easily balance traffic globally in the future.
CONFUSION: In the past with my little startup I've used Azure Traffic Manager to route traffic to several App Services hosted all over the globe. When I heard of Front Door I was confused, but it seems like Traffic Manager is mostly global DNS load balancing for any network traffic, while Front Door is Layer 7 load balancing for HTTP traffic, and uses a variety of reasons to route traffic. Azure Front Door also can act as a CDN and cache all your content as well. There's lots of detail on Front Door's routing architecture details and traffic routing methods. Azure Front Door is definitely the most sophisticated and comprehensive system for fronting all my traffic. I'm still learning what's the right size app for it and I'm not sure a blog is the ideal example app.
Here's how I set up /blog to hit one Backend Pool. I have it accepting both HTTP and HTTPS. Originally I had a few extra Front Door rules, one for HTTP, one for HTTPS, and I set the HTTP one to redirect to HTTPS. However, Front Door charges 3 cents an hour for each of the first 5 routing rules (then about a penny an hour for each after 5) but I don't (personally) think I should pay for what I consider "best practice" rules. That means, forcing HTTPS (an internet standard, these days) as well as URL canonicalization with a trailing slash after paths. That means /blog should 301 to /blog/ etc. These are simple prescriptive things that everyone should be doing. If I was putting a legacy app behind a Front Door, then this power and flexibility in path control would be a boon that I'd be happy to pay for. But in these cases I may be able to have that redirection work done lower down in the app itself and save money every month. I'll update this post if the pricing changes.
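If I do end up doing that redirection work in the app itself, a minimal sketch using ASP.NET Core's rewrite middleware (untested here, and the exact rule would need checking against my URL structure) might look something like this:
// using Microsoft.AspNetCore.Rewrite - force HTTPS and add a trailing slash in-app
var rewrite = new RewriteOptions()
    .AddRedirectToHttpsPermanent()                // 301 any HTTP request over to HTTPS
    .AddRedirect("^(blog/.*[^/])$", "$1/", 301);  // hypothetical rule: 301 /blog/foo to /blog/foo/
app.UseRewriter(rewrite);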
After I set up Azure Front Door I noticed my staging blog was getting hit every few seconds, all day forever. I realized there are some health checks but since there's 80+ Azure Front Door locations and they are all checking the health of my app, it was adding up to a lot of traffic. For a large app, you need these health checks to make sure traffic fails over and you really know if your app is healthy. For my blog, less so.
There's a few ways to tell Front Door to chill. First, I don't need Azure Front Door doing GET requests on /. I can instead ask it to check something lighter weight. With ASP.NET 2.2 it's as easy as adding HealthChecks. It's much easier, less traffic, and you can make the health check as comprehensive as you want.
app.UseHealthChecks("/healthcheck");
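The only other piece (in ASP.NET Core 2.2) is registering the health check services in Startup.ConfigureServices - roughly this, with nothing fancy wired up:
// Minimal sketch - register the built-in health check services (no database or dependency checks added)
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
    services.AddMvc();
}
Then Front Door's health probe path just gets pointed at /healthcheck instead of /.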
Next I turned the Interval WAY up so it wouldn't bug me every few seconds.
These two small changes made a huge difference in my traffic as I didn't have so much extra "pinging."
After setting up Azure Front Door, I also turned on Custom Domain HTTPS and pointed staging to it. It was very easy to set up and was included in the cost.
I haven't decided if I want to set up Front Door's caching or not, but it might mean an easier, more central way than using a CDN manually and changing the URLs for my site's static content and images. In fact, the POP (Point of Presence) locations for Front Door are the same as those for Azure CDN.
NOTE: I will have to at some point manage the Apex/Naked domain issue where hanselman.com and www.hanselman.com both resolve to my website. It seems this can be handled by either CNAME flattening or DNS chasing and I need to check with my DNS provider to see if this is supported. I suspect I can do it with an ALIAS record. Barring that, Azure also offers an Azure DNS hosting service.
There is another option I haven't explored yet called Azure Application Gateway that I may test out and see if it's cheaper for what I need. I primarily need SSL cert management and URL routing.
I'm continuing to explore as I build out this migration plan. Let me know your thoughts in the comments.
Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today
© 2019 Scott Hanselman. All rights reserved.
lopezdorothy70-blog · 5 years
Text
Forced Water Fluoride Poisoning: More People in U.S. Drink Fluoride-adulterated Water Than All Other Countries Combined
U.S. Water Fluoridation: A Forced Experiment that Needs to End
By the Children's Health Defense Team
The United States stands almost entirely alone among developed nations in adding industrial silicofluorides to its drinking water, imposing the community-wide measure without informed consent.
Globally, roughly 5% of the population consumes chemically fluoridated water, but more people in the U.S. drink fluoride-adulterated water than in all other countries combined.
Within the U.S., just under a third (30%) of local water supplies are not fluoridated; these municipalities have either held the practice at bay since fluoridation's inception or have won hard-fought battles to halt water fluoridation.
The fluoride chemicals added to drinking water are unprocessed toxic waste products: captured pollutants from Florida's phosphate fertilizer industry or unregulated chemical imports from China.
The chemicals undergo no purification before being dumped into drinking water and often harbor significant levels of arsenic and other heavy metal contamination; one researcher describes this unavoidable contamination as a
“regulatory blind spot that jeopardizes any safe use of fluoride additives.”
Dozens of studies and reviews, including in top-tier journals such as The Lancet, have shown that fluoride is neurotoxic and lowers children's IQ. Fluoride is also associated with a variety of other health risks in both children and adults.
However, U.S. officialdom persists in making hollow claims that water fluoridation is safe and beneficial, choosing to ignore even its own research!
A multimillion-dollar longitudinal study published in Environmental Health Perspectives in September 2017, for example, was largely funded by the National Institutes of Health and the National Institute of Environmental Health Sciences, and the seminal study revealed a strong relationship between fluoride exposure in pregnant women and lowered cognitive function in offspring.
Considered in the context of other research, the study's implications are, according to the nonprofit Fluoride Action Network, “enormous”: “a cannon shot across the bow of the 80-year-old practice of artificial fluoridation.”
A little history
During World War II, fluoride (a compound formed from the chemical element fluorine) came into large-scale production and use as part of the Manhattan Project.
According to declassified government documents summarized by Project Censored, Manhattan Project scientists discovered early on that fluoride was a “leading health hazard to bomb program workers and surrounding communities.”
In order to stave off lawsuits, government scientists:
“embarked on a campaign to calm the social panic about fluoride…by promoting its usefulness in preventing tooth decay.”
To prop up its “exaggerated claims of reduction in tooth decay,” government researchers began carrying out a series of poorly designed and fatally flawed community trials of water fluoridation in a handful of U.S. cities in the mid-1940s.
In a critique decades later, a University of California-Davis statistician characterized these early agenda-driven fluoridation trials as:
“especially rich in fallacies, improper design, invalid use of statistical methods, omissions of contrary data, and just plain muddleheadedness and hebetude.”
As one example, a 15-year trial launched in Grand Rapids, Michigan in 1945 used a nearby city as a non-fluoridated control, but after the control city began fluoridating its own water supply five years into the study, the design switched from a comparison with the non-fluoridated community to a before-and-after assessment of Grand Rapids.
Fluoridation's proponents admitted that this change substantially “compromised” the quality of the study.
In 1950, well before any of the community trials could reach any conclusions about the systemic health effects of long-term fluoride ingestion, the U.S. Public Health Service (USPHS) endorsed water fluoridation as official public health policy, strongly encouraging communities across the country to adopt the unproven measure for dental caries prevention.
Describing this astonishingly non-evidence-based step as “the Great Fluoridation Gamble,” the authors of the 2010 book, The Case Against Fluoride, argue that:
“Not only was safety not demonstrated in anything approaching a comprehensive and scientific study, but also a large number of studies implicating fluoride's impact on both the bones and the thyroid gland were ignored or downplayed” (p. 86).
In 2015, Newsweek magazine not only agreed that the scientific rationale for putting fluoride in drinking water was not as “clear-cut” as once thought but also shared the “shocking” finding of a more recent Cochrane Collaboration review, namely, that there is no evidence to support the use of fluoride in drinking water.
Bad science and powerful politics
The authors of The Case Against Fluoride persuasively argue that “bad science” and “powerful politics” are primary factors explaining why government agencies continue to defend the indefensible practice of water fluoridation, despite abundant evidence that it is unsafe both developmentally and after “a lifetime of exposure to uncontrolled doses.”
Comparable to Robert F. Kennedy, Jr.'s book, Thimerosal: Let the Science Speak, which summarizes studies that the Centers for Disease Control and Prevention (CDC) and “credulous journalists swear don't exist,” The Case Against Fluoride is an extensively referenced tour de force, pulling together hundreds of studies showing evidence of fluoride-related harm.
The research assembled by the book's authors includes studies on fluoride biochemistry; cancer; fluoride's effects on the brain, endocrine system and bones; and dental fluorosis.
With regard to the latter, public health agencies like to define dental fluorosis as a purely cosmetic issue involving “changes in the appearance of tooth enamel,” but the International Academy of Oral Medicine & Toxicology (IAOMT), a global network of dentists, health professionals and scientists dedicated to science-based biological dentistry, describes the damaged enamel and mottled and brittle teeth that characterize dental fluorosis as “the first visible sign of fluoride toxicity.”
The important 2017 study that showed decrements in IQ following fluoride exposure during pregnancy is far from the only research sounding the alarm about fluoride's adverse developmental effects.
In his 2017 volume, Pregnancy and Fluoride Do Not Mix, John D. MacArthur pulls together hundreds of studies linking fluoride to premature birth and impaired neurological development (93 studies), preeclampsia (77 studies) and autism (110 studies).
The book points out that rates of premature birth are “unusually high” in the United States.
At the other end of the lifespan, MacArthur observes that death rates in the ten most fluoridated U.S. states are 5% to 26% higher than in the ten least fluoridated states, with triple the rate of Alzheimer's disease. A 2006 report by the National Research Council warned that exposure to fluoride might increase the risk of developing Alzheimer's.
The word is out
Pregnancy and Fluoride Do Not Mix shows that the Institute of Medicine, National Research Council, Harvard's National Scientific Council on the Developing Child, Environmental Protection Agency (EPA) and National Toxicology Program all are well aware of the substantial evidence of fluoride's developmental neurotoxicity, yet no action has been taken to warn pregnant women.
Instead, scientists with integrity, legal professionals and the public increasingly are taking matters into their own hands. A Citizens Petition submitted in 2016 to the EPA under the Toxic Substances Control Act requested that the EPA “exercise its authority to prohibit the purposeful addition of fluoridation chemicals to U.S. water supplies.”
This request, the focus of a lawsuit to be argued in court later in 2019, poses a landmark challenge to the dangerous practice of water fluoridation and has the potential to end one of the most significant chemical assaults on our children's developing bodies and brains.
Read the full article at ChildrensHealthDefense.org.
© 2018 Children's Health Defense, Inc.
This work is reproduced and distributed with the permission of Children's Health Defense, Inc.
Want to learn more from Children's Health Defense? Sign up for free news and updates from Robert F. Kennedy, Jr. and the Children's Health Defense. Your donation will help to support them in their efforts.
iasshikshalove · 4 years
Text
DAILY CURRENT AFFAIRS DATED ON 09-OCT 2019
GS-1

Saturn's moons

Why in news?
With the discovery of 20 new moons, the ringed planet now has a total of 82 moons against Jupiter's 79. The solar system has a new winner in the moon department.

Recent study
- Twenty new moons have been found around Saturn, giving the ringed planet a total of 82, scientists said Monday.
- That beats Jupiter and its 79 moons.
- "It was fun to find that Saturn is the true moon king."
- If it's any consolation to the Jupiter crowd, our solar system's biggest planet Jupiter still has the biggest moon.
- Jupiter's Ganymede is almost half the size of Earth. By contrast, Saturn's 20 new moons are minuscule, each barely 5 km in diameter.
- Astronomers have pretty much completed the inventory of moons as small as 5 kilometers around Saturn and 1.6 kilometers around Jupiter.
- It's harder spotting mini moons around Saturn than Jupiter, Mr. Sheppard said, given how much farther Saturn is.

About Saturn's moons
- Seventeen of Saturn's new moons orbit the planet in the opposite, or retrograde, direction. The other three circle in the same direction that Saturn rotates.
- They're so far from Saturn that it takes two to three years to complete a single orbit.
- "These moons are the remnants of the objects that helped form the planets, so by studying them, we are learning about what the planets formed from," Mr. Sheppard wrote.

Monsoon in India

Why is the monsoon refusing to leave this year?
- The answer may lie in a complex set of factors, including a little understood hot-cold condition over the Indian Ocean.
- The standout feature of this year's monsoon has been the unusually high rainfall in September.
- The month just gone by normally sees 170.2 mm rain over the country as a whole; this year, September saw 259.3 mm of rain, over 52% more than the average.
- Also, September 30 is officially the end of India's four-month monsoon season. But this year, the monsoon has refused to go away.
- This year, the India Meteorological Department has said, the monsoon might begin to withdraw only after October 10. So why did September get so much rain this year?
- It is well into October, and large parts of Bihar, including the capital Patna, are reeling under floods due to massive rainfall events.

Was it the La Niña?
- La Niña, the phenomenon in the equatorial Pacific Ocean in which the sea surface temperatures turn unusually cold, is known to strengthen rainfall over the Indian sub-continent during the monsoon months.
- However, there is no La Niña this year.
- In fact, the year started with a weak El Niño, the opposite phenomenon in the Pacific Ocean that has a negative impact on the Indian monsoon, before the situation turned neutral.

The other possibility: IOD
- Given the fact that there was no La Niña that could possibly explain the massive September rain, scientists have been looking at a similar phenomenon much closer home, called the Indian Ocean Dipole (IOD), which could have contributed to enhanced rainfall.
- "There was a cooling of the eastern equatorial Indian Ocean, below Sumatra, and that could have some role to play in the kind of rainfall that we have seen this year."
- The IOD is a phenomenon similar to the ENSO condition observed in the Pacific Ocean which creates the El Niño and La Niña events.
- The sea surface temperatures in the Indian Ocean get warmer and cooler than normal, and this deviation influences regional atmospheric and weather patterns, notably the Indian monsoon.
But there is one major difference between ENSO and IOD.  While the Pacific Ocean only has an El Niño or a La Niña condition at a time, the Indian Ocean experiences both warm and cold conditions at the same time – hence, a dipole.  One of these poles is located in the Arabian Sea, while the other is in the Indian Ocean, south of Indonesia.  The Indian Ocean Dipole is said to be 'positive' when the western pole is warmer than the eastern one, and 'negative' when it is cooler.  Indian Ocean Dipole and ENSO are not unrelated. So, positive IOD events are often associated with El Niño, and negative IOD events with La Niña.  Therefore, when the IOD and ENSO happen at the same time, the Dipole is known to strengthen the impacts of the ENSO condition. Intertropical Convergence Zone  Many scientists like to describe the monsoon in terms of the movement of the Intertropical Convergence Zone, or ITCZ, a region near the Equator where the trade winds of the northern and southern hemispheres come together.  The intense Sun and the warm waters of the ocean heat up the air in this region, and increase its moisture content. As the air rises, it also cools, and releases the accumulated moisture, thus bringing rainfall.  During the monsoon season, this ITCZ is located over the Indian subcontinent. By September, as the temperature begins to go down, the ITCZ starts moving southwards of the Indian landmass, towards the equator, and further into the southern hemisphere.  “In September this year, the northern hemisphere was much warmer than the southern hemisphere, and that could be one reason why the ITCZ has remained longer than usual over the northern hemisphere,. GS-2 Drone based delivery Why in news? The Telangana government has adopted a framework to use drones for last-mile delivery of essential medical supplies such as blood and medical samples in an effort to increase the access to healthcare to communities across the state. About the framework  The framework has been co-designed by the World Economic Forum (WEF) and Apollo Hospitals Group Healthnet Global Limited. In July, Telangana submitted a proposal for its drone policy to the Directorate General of Civil Aviation (DGCA).  The state hopes to become 'beyond visual line of sight' (BVLOS) compliant, making commercial use of drones possible. Why drones?  In its press release, the WEF underlined the core advantage of their use: reduction of the time taken to transport material, and improving supply chain efficiency.  It cited the example of Rwanda, where drone-related pilot projects have been implemented on a national scale to deliver medical supplies without delay and at scheduled intervals.  The project is a part of the WEF’s “Medicine from the Sky” initiative that aims to develop source materials for policymakers and health systems to analyse the challenges that come with drone delivery, and to compare this model with other competing delivery models. Drone regulations  A drone is an aircraft that operates without a pilot on board and is referred to as an Unmanned Aerial Vehicle (UAV).  It has three subsets: Remotely Piloted Aircraft (RPA), Autonomous Aircraft, and Model Aircraft.  An RPA can be further classified into five types on the basis of weight: nano, micro, small, medium and large. RPAs are aircraft that are piloted from remote pilot stations. 
 In India, the Directorate General of Civil Aviation (DGCA) under the Ministry of Civil Aviation acts as the regulatory body in the field of civil aviation, responsible for regulating air transport and ensuring compliance to civil aviation requirements, air safety, and airworthiness standards.  The DGCA's drone policy requires all owners of RPAs, except drones in the smallest 'nano' category, to seek permission for flights, and comply with regulations including registration, and operating hours (only during the day) and areas (not above designated high security zones). Permission required  There is no blanket permission for flying BVLOS; the visual line of sight being 450 m with a minimum ground visibility of 5 km.  The food delivery platform Zomato has tried out a drone to deliver a payload of up to 5 kg to a distance of 5 km, flying at a maximum speed of 80 km/h; however, regulations do not yet allow the delivery of food by drones.  A change of regulations will be required before largescale use of drones can be made possible for medical or other purposes. A report on malnutrition Context Malnutrition among children in urban India is characterised by relatively poor levels of breastfeeding, higher prevalence of iron and Vitamin D deficiency as well as obesity due to long commute by working mothers, prosperity and lifestyle patterns, while rural parts of the country see higher percentage of children suffering from stunting, underweight and wasting and lower consumption of milk products — these are among the findings of the first-ever national nutrition survey conducted by the government. Salient points of the report  The Comprehensive National Nutrition Survey released by the government on Monday shows that 83% of children between 12 and 15 months continued to be breastfed, a higher proportion of children in this age group residing in rural areas are breastfed (85%) compared to children in urban areas (76%).  Breastfeeding is inversely proportional to household wealth and other factors influencing this trend may include working mothers who have to travel long distances to reach their workplace.  Because of these reasons, it also noted that rural children receive meals more frequently in a day at 44% as compared to 37% of urban children.  However, a higher proportion of children residing in urban areas (26.9%) are fed an adequately diverse diet as compared to those in rural areas (19%).  Children and adolescents residing in urban areas also have a higher (40.6%) prevalence of iron deficiency compared to their rural counterparts (29%), which experts say is due to a better performance of the government’s health programmes in rural areas.  Rural areas also witness higher prevalence of stunting (37% in rural versus 27% in urban), underweight (36% in rural versus 26% in urban) and severe acute malnutrition (34.7% in rural areas for children in 5-9 years versus 23.7% in urban areas and 27.4% in urban areas for adolescents in 10-19 years versus 32.4% in rural areas). GS-3 Graded Response Action Plan (GRAP) Starting October 15, some stricter measures to fight air pollution will come into force in Delhi’s neighbourhood, as part of the Graded Response Action Plan (GRAP). The action plan has already been in effect for two years in Delhi and the National Capital Region (NCR) New changes  What is new in the recent announcement is that measures aimed at stopping the use of diesel generator sets will, from next week, extend beyond Delhi to the NCR, where many areas see regular power cuts. 
 The measures that are coming into force will be incremental. As pollution rises, and it is expected to as winter approaches, more measures will come into play depending on the air quality.  All these measures are part of GRAP, which was formulated in 2016 and notified in 2017. Experts working in the field of air pollution have credited this list of measures with causing the dip in Delhi’s air pollution over the past few years. What is GRAP?  Approved by the Supreme Court in 2016, the plan was formulated after several meetings that the Environment Pollution (Prevention and Control) Authority (EPCA) held with state government representatives and experts.  The result was a plan that institutionalised measures to be taken when air quality deteriorates.  GRAP works only as an emergency measure.  As such, the plan does not include action by various state governments to be taken throughout the year to tackle industrial, vehicular and combustion emissions.  When the air quality shifts from poor to very poor, the measures listed under both sections have to be followed since the plan is incremental in nature.  If air quality reaches the severe+ stage, GRAP talks about shutting down schools and implementing the odd-even road-space rationing scheme. Success of GRAP  GRAP has been successful in doing two things that had not been done before — creating a stepby-step plan for the entire Delhi-NCR region and getting on board several agencies: all pollution control boards, industrial area authorities, municipal corporations, regional officials of the India Meteorological Department, and others.  The plan requires action and coordination among 13 different agencies in Delhi, Uttar Pradesh, Haryana and Rajasthan (NCR areas).  At the head of the table is the EPCA, mandated by the Supreme Court.  GRAP was notified in 2017 by the Centre and draws its authority from this notification. B  Before the imposition of any measures, EPCA holds a meeting with representatives from all NCR states, and a call is taken on which actions has to be made applicable in which town.  Last year, the ban on using diesel generator sets was implemented only in Delhi. This year, it is being extended to a few NCR towns. Rural areas are, however, being left out of this stringent measure because of unreliable power supply. Has GRAP helped?  The biggest success of GRAP has been in fixing accountability and deadlines.  For each action to be taken under a particular air quality category, executing agencies are clearly marked.  In a territory like Delhi, where a multiplicity of authorities has been a long-standing impediment to effective governance, this step made a crucial difference.  Also, coordination among as many as 13 agencies from four states is simplified to a degree because of the clear demarcation of responsibilities.  Three major policy decisions that can be credited to EPCA and GRAP are the closure of the thermal power plant at Badarpur, bringing BS-VI fuel to Delhi before the deadline set initially, and the ban on Pet coke as a fuel in Delhi NCR.  The body continues to monitor pollution and assists the Supreme Court in several pollutionrelated matters. What measures have been taken in other states?  One criticism of the EPCA as well as GRAP has been the focus on Delhi.  While other states have managed to delay several measures, citing lack of resources, Delhi has always been the first one to have stringent measures enforced. 
 In a recent meeting that discussed the ban on diesel generator sets, the point about Delhi doing all the heavy lifting was also raised.  In 2014, when a study by the World Health Organization found that Delhi was the most polluted city in the world, panic spread in the Centre and the state government.  The release of a study on sources of air pollution the following year also gave experts, NGOs and scientists a handle on why Delhi was so polluted.  All of these things, state government officials say, have made Delhi the obvious pilot project.  For GRAP as well as EPCA, the next challenge is to extend the measures to other states effectively. Nobel for physics This year's Nobel Prize for Physics, announced on Tuesday, recognises research that helps us understand our place in the universe. Canadian-American cosmologist James Peebles, 84, won one-half of the Prize for his theoretical work helping us understand how the universe evolved after the Big Bang. The other half went to Swiss astronomers Michel Mayor, 77, and Didier Queloz, 53, for their discovery of an exoplanet that challenged preconceived ideas about planets. How the universe evolved  Modern cosmology assumes that the universe formed as a result of the Big Bang.  In decades of work since the 1960s, Peebles used theoretical physics and calculations to interpret what happened after.  His work is focused largely on Cosmic Microwave Background (CMB) radiation, which is electromagnetic radiation left over from the early universe once it had cooled sufficiently following the Big Bang.  Today, CMB can be observed with detectors.  When it was observed for the first time in 1964 by radio astronomers Arnold Penzias and Robert Wilson —who would go non to be awarded the 1978 Physics Nobel — they were initially puzzled. They learnt later that Peebles had predicted such radiation. Peebles and colleagues have correlated the temperature of this radiation with the amount of matter created in the Big Bang, which was a key step towards understanding how this matter would later form the galaxies and galaxy clusters. From their work derives our knowledge of how mysterious the universe is — just 5% known matter and the rest unknown, as dark matter (26%) and dark energy (69%). Exoplanets  The hunt for extraterrestrial life, if any exists, depends on finding habitable planets, mainly outside our Solar System.  Today, exoplanets are being discovered very frequently — over 4,000 are known — which is remarkable progress from three decades ago, when not even one exoplanet was known.  The first confirmed discoveries came in 1992, but these were orbiting not a star but the remains of one. About the planet  The planet discovered by Mayor and Queloz in 1995 is 50 light years away, orbiting the star 51 Pegasus that is similar to our Sun.  Called 51 Pegasus b, the exoplanet is not habitable either, but it challenged our understanding of planets and laid the foundation for future discoveries.  Using a spectrograph, ELODIE, built by Mayor and collaborators and installed at the HauteProvence Observatory in France, they predicted the planet by observing the “Doppler effect” — when the star wobbles as an effect of a planet’s gravity on its observed light.  It is a gas giant comparable to Jupiter, yet it very hot, unlike icy cold Jupiter; 51 Pegagsus b is even closer to its star than Mercury is to our Sun.  Until then, gas giants were presumed to be cold, formed a great distance from their stars. 
 Today, it is accepted that these hot gas giants represent what Jupiter would look like if it were suddenly transported closer to the Sun.  The discovery of the planet “started a revolution in astronomy”, as described in the official Nobel Prize website.  “Strange new worlds are still being discovered... forcing scientists to revise their theories of the physical processes behind the origins of planets. Chandrayaan-2 Why in news? LAST WEEK, the Indian Space Research Organisation (ISRO) tweeted that an instrument on Chandrayaan2, CLASS, designed to detect signatures of elements in the Moon’s soil, had detected charged particles during the mission. This happened in September, during the orbiter’s passage through the “geotail”. Details  The geotail is a region in space that allows the best observations.  The region exists as a result of the interactions between the Sun and Earth. On its website, ISRO explains how the region is formed, and how it helps scientific observations.  The Sun emits the solar wind, which is a continuous stream of charged particles.  These particles are embedded in the extended magnetic field of the Sun. Since the Earth has a magnetic field, it obstructs the solar wind plasma.  This interaction results in the formation of a magnetic envelope around Earth (see illustration).  On the Earth side facing the Sun, the envelope is compressed into a region that is approximately three to four times the Earth radius.  On the opposite side, the envelope is stretched into a long tail, which extends beyond the orbit of the Moon. It is this tail that is called the geotail.  Once every 29 days, the Moon traverses the geotail for about six days. When Chandrayaan-2, which is orbiting the Moon, crosses the geotail, its instruments can study the properties of the geotail, ISRO said. For the CLASS instrument seeking to detect element signatures, the lunar soil can be best observed when a solar flare provides a rich source of X-rays to illuminate the surface. Secondary X-ray emission resulting from this can be detected by CLASS to directly detect the presence of key elements like Na, Ca, Al, Si, Ti and Fe, ISRO said. Fossil discovery Why in news? Two fossils dating back 25 million years were found in Makum coalfield in Assam With over 49,000 plant species reported as of 2018, India holds about 11.5% of all flora in the world. Now, a new fossil record has shown that India is the birthplace of Asian bamboo, and they were formed about 25 million years ago in the north-eastern part of the country. Ancient fossils  An international team of researchers found two fossil compressions or impressions of bamboo culms (stems) and after further study noted them to be new species.  They were named Bambusiculmus tirapensis and B. makumensis - as they were found in the Tirap mine of Makum Coalfield in Assam.  These belonged to the late Oligocene period of about 25 million years ago.  They also found two impressions of bamboo leaves belonging to new species Bambusium deomarense, and B. arunachalense, named after the Doimara region of Arunachal Pradesh where it was discovered.  These leaves were found in the late Miocene to Pliocene sediments, indicating that they were between 11 and three million years old.  Yunnan Province in China now has the highest diversity of bamboo, but the oldest fossil in that region is less than 20 million years old, clearly indicating that Asian bamboo was born in India and then migrated there. 
V  This finding further strengthens the theory that bamboo came to Asia from India and not from Europe. Role of plate tectonics  In fact, the European bamboo fossil is about 50 million years old. Dr Srivastava explains that the Indian plate collided with the Eurasian plate about 50 million years ago.  However, the suturing between the two plates were not completed until 23 million years, meaning the plates were not completely joined, restricting migration of plants and animals.  And also as the Himalayas were not formed yet, the temperature was also warm and humid in the Northeastern region, with not many seasonal variations.  The present climate in the region is cold with strong winter and summer conditions.  Bamboo braved these climatic and geographical changes making it the fittest in the survival race. This study has shown that India is a treasure trove of plant fossils and more importance needs to be given to its study.
0 notes
rigelmejo · 1 year
Text
I’ve been thinking about this a lot lately. You know how they say it’s easiest to learn from stuff that’s 98% comprehensible to you (like 2 unknown words out of 100)? There’s also the lower estimate of 95% comprehensible, sometimes. The idea being that if you comprehend that much of what you read, then you can just read extensively in the language you’re learning and pick up words fairly easily from context, just like how you learn words in your native language by reading for pleasure. (And this ‘read 98% comprehensible stuff’ suggestion is ALSO often used to get native speakers to improve their reading, vocabulary, and literacy in their own language.)
Well, I dug into that idea before, wondering if there’s a lower threshold at which one can simply read extensively (as in, without looking anything up) and still pick up new words from context. Personally, in my own experience, I have success learning from context alone when I at MINIMUM comprehend the ‘main overall idea’ of a text without word lookup. If I look up words, then I can learn new words a bit below that threshold. But if I look up no words, I need to at least already be able to grasp the main idea of what is being communicated, which provides the context to guess the meaning of some of the unknown words. I know that level of ‘comprehending the main idea but possibly not much more’ is well below 98% or even 95% comprehension, because I’ve measured my comprehension of a text a year after first reading it for the main idea, and I only got to 95% comprehension (defined here, for my sake, as 95/100 words I knew the definition of per text section) after that year. But I could comprehend enough to follow the main idea and some details fairly well a year before reaching 95% comprehension. So there is a lower amount of comprehension at which a person has enough context to figure out the meaning of Some New Words from context only.
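If you want to put a rough number on this yourself, here is a minimal sketch of how that kind of per-text comprehension percentage could be estimated, assuming you keep a running list of words you know. The tokenizer, the word list, and the sample sentence are all invented for illustration, and the regex only works for languages written with spaces between words (Chinese would need a word segmenter instead).

```python
import re

def comprehension(text, known_words):
    """Share of word tokens whose lowercased form is in the known-word list.
    A real measurement would need lemmatisation and handling of names and
    numbers - this only gives a rough estimate."""
    tokens = re.findall(r"[a-z']+", text.lower())
    known = sum(1 for t in tokens if t in known_words)
    return known / len(tokens) if tokens else 0.0

# Invented example: 11 of the 13 tokens are "known", so about 85%.
known = {"the", "a", "cat", "sat", "on", "mat", "and", "looked", "at", "moon"}
sample = "The cat sat on the mat and looked at the enormous harvest moon."
print(f"{comprehension(sample, known):.0%} of tokens known")
```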
I saw one study mention 90% comprehension (defined here as knowing the definitions of 90/100 words per text section) as being enough to learn new words from extensive reading. The study mentioned that for some people 80-90% comprehension is enough to learn new words from context alone (extensive reading), provided the reader has a decent tolerance for ambiguity. Tolerance for ambiguity is how much you, the reader, can tolerate trying to figure out the meaning of something when there are ambiguous parts you cannot understand. Think about being 5 and maybe just having learned to read Dr. Seuss, then imagine you picked up The Hunger Games at age 5 - do you think you would have had the tolerance to try and read it? Imagine you picked up Stephen King at age 5 - do you think you could have tolerated trying to sound out words, reading many words you’d never heard or studied, and trying to understand it? Okay, picture that you simply picked up Magic Treehouse, something written for a 6-year-old - you might have been able to get yourself to read through it and figure some words out. Or you pick up Bunnicula or Catwings, aimed at maybe 7-8 year olds, and try to read them. There is a point where you personally could probably have started tolerating the ambiguity and managed to read a book written for readers who knew more words than you. I remember when I was reading at about a 7 year old level, I’d sometimes try to read James Michener and Stephen Hawking because they were on my dad’s bookshelf. They were much too hard, so I gave up after a few pages. My dad tried to get me into reading with Harry Potter, which is made for 11-year-olds, and that worked a bit better and ultimately pushed my reading level up to that point (while I was still around 7-8 years old). After that, I could go back to stuff like Sherlock Holmes and James Michener on my dad’s shelf and tolerate reading them, even with the level of ambiguity of words I still didn’t know. Some people are going to have a higher ambiguity tolerance than me, and be fine reading material WAY ABOVE their vocabulary level sooner. And some people will have a much lower tolerance than me, and want to read only slightly challenging stuff or even perhaps only stuff nearly at their current reading level (around 98% comprehensible). As adults, we have the benefit of having the patience to use word lookups if we read something at a ‘much higher reading level’ than we are currently at, so we could even chug through something we only knew 30% of the words for, if we had a dictionary (or click translation tool) on hand. So everyone’s tolerance for ambiguity in reading will vary, especially if we are willing to use word lookup.
Without using word lookup, doing ONLY extensive reading, people generally seem to need at minimum 80-90% comprehension of the words in a given text to tolerate the ambiguity while still understanding enough of the main idea to use it as context to guess some of the unknown words. Within that range, a lot of people are going to find 80-90% comprehension frustratingly ambiguous and won’t tolerate reading it, and will prefer reading materials they comprehend 90-98% of, or 95-98% of. Most people can easily tolerate 98% comprehensible materials, so most graded readers aim for this. Once you get into reading novels made for native speakers, though, it can be difficult to find reading materials where you actually know 98% of the words. Knowing 80% of the words isn’t too impossible a task: if you’re learning 2000 common words, then eventually those learned words will cover around 80% of the text you see. The stretch from 80% to 98% comprehension is where it will be most difficult to find reading materials. Some stuff will be mostly words you know; other stuff you will really only know 80% of the words of, and you’ll need to learn a lot of new words to get your comprehension up into the 90s. Your tolerance for ambiguity will determine how much you’ll want to rely on word lookups before you veer into reading extensively to pick up more vocabulary.
And finally, why I made this post. I’ve been thinking about how some articles will show you what 98% comprehension of a text looks like: they show you a text with all the words in English except 2 out of 100, which are gibberish. The articles might also show 90% comprehension with 10 gibberish words and 90 English ones, to really hammer home how ambiguous 90% comprehension of a text can be.
But I’d like to argue that, when reading in another language, ‘90% comprehension’ isn’t quite that brutal. In a real language, there are word roots, stems, endings, compound words, grammar conjugations and patterns. These are all HINTS at what an unknown word means. If you saw ‘con-national’ (a made-up English word, the gibberish in this example), I bet you could make a guess about what it means in the context of a sentence. If you saw ‘shesh-ly’, I bet you could guess it’s some kind of adjective/descriptor word. I bet if you saw ‘maidenfrysk’ you would guess maybe it’s a compound word that has to do with maidens/women, and maybe you’d try to guess ‘frysk’ in context or just decide it’s some job/position/noun like maiden or nursemaid would be. Even if you only know 90/100 words in a foreign-language text, you may have some hints in the unknown 10 words about what they mean. The grammar you know of the language helps with identifying word endings and word roots, and with identifying compound words and the portions of a word you DO already know. You aren’t actually going to see an unknown word like ‘djdkddl’ so much as you’d see an unknown word like ‘sek-flower’ or ‘dsk-ceivable’, where you have some clues within the word and around it, with grammar, to help guess what it means. In Chinese, you have components within each individual hanzi to at least vaguely give you something about the word, along with the surrounding grammar, and any other words next to it potentially making it a compound word and potentially contributing the meaning you ALREADY know from the familiar portion of the compound. So yes, reading with 90% comprehension is difficult in the sense that you only know for certain 90/100 word definitions in a text. But for the 10 unknown words, you often have some partial meaning you can figure out, which makes the actual amount you might understand a bit more than 90%. At least that’s what I think...
That said, the studies on word comprehension were often done on native speakers who knew the definitions of 98/100 words in a text (etc.), so they did have context and partial-word information for the unknown words. So my commentary above is more about how, in foreign languages, the examples that depict the unknown words as total gibberish make a given % of comprehension seem a bit harder to work with than it actually is.
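As a toy illustration of that point, the sketch below degrades a sentence to roughly 90% coverage in two ways: replacing the ‘unknown’ words with pure gibberish, versus leaving their first letter and ending visible, the way real roots and affixes stay visible in an actual language. The masking rule and the sample sentence are invented purely for the demo; this is not taken from any of the studies mentioned above.

```python
import random

def mask_words(words, coverage=0.90, mode="gibberish", seed=2):
    """Degrade a word list to roughly `coverage` known words. 'gibberish'
    wipes the unknown words entirely; 'partial' keeps the first letter and
    the ending, the way real unknown words still show roots and affixes."""
    rng = random.Random(seed)
    out = []
    for w in words:
        if rng.random() < coverage:
            out.append(w)  # a word the reader "knows"
        elif mode == "gibberish":
            out.append("".join(rng.choice("bcdfgklmprstvz") for _ in w))
        else:  # partial clues survive
            out.append(w[0] + "_" * max(len(w) - 3, 1) + w[-2:])
    return " ".join(out)

sentence = ("even with ten unknown words per hundred the surrounding grammar "
            "and familiar word endings still point toward their meanings").split()
print(mask_words(sentence, mode="gibberish"))
print(mask_words(sentence, mode="partial"))
```

Because both calls use the same seed, the same words get masked each time, so you can compare how much easier the partially visible versions are to guess than the gibberish ones.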
2 notes · View notes
Text
The problem with TEF – a look at the technical failings
Professor DV Bishop outlines the multiple flaws in the TEF methodology
  In a previous post I questioned the rationale and validity of the Teaching Excellence and Student Outcomes Framework (TEF). Here I document the technical and statistical problems with TEF.
  How are statistics used in TEF?
Two types of data are combined in TEF: a set of ‘contextual’ variables, including student backgrounds, subject of study, level of disadvantage, etc., and a set of ‘quality indicators’ as follows:
Student satisfaction – as measured by responses to a subset of items from the National Student Survey (NSS)
Continuation – the proportion of students who continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA)
Employment outcomes – what students do after they graduate, as measured by responses to the Destination of Leavers from Higher Education survey (DLHE)
As detailed further below, data on the institution’s quality indicators is compared with the ‘expected value’ that is computed based on the contextual data of the institution. Discrepancies between obtained and expected values, either positive or negative, are flagged and used, together with a written narrative from the institution, to rate each institution as Gold, Silver or Bronze. This beginner’s guide provides more information.
  Problem 1: Lack of transparency and reproducibility
When you visit the DfE’s website, the first impression is that it is a model of transparency. On this site, you can download tables of data and even consult interactive workbooks that allow you to see the relevant statistics for a given provider. Track through the maze of links and you can also find an 87-page technical document of astounding complexity that specifies the algorithms used to derive the indicators from the underlying student data, DLHE survey and NSS data.
The problem, however, is that nowhere can you find a script that documents the process of deriving the final set of indicators from the raw data: if you try to work this out from first principles by following the HESA guidance on benchmarking, you run into the sand, because the institutional data is not provided in the right format.  When I asked the TEF metrics team about this, I was told: “The full process from the raw data in HESA/ILR returns, NSS etc. cannot be made fully open due to data protection issues, as there is sensitive student information involved in the process.” But this seems disingenuous. I can see that student data files are confidential, but once this information has been extracted and aggregated at institutional level, it should be possible to share it. If that isn’t feasible, then the metrics team should be able to at least generate some dummy data sets, with scripts that would do the computations that convert the raw metrics into the flags that are used in TEF rankings.
As someone interested in reproducibility in science, I’m all too well aware of the problems that can ensue if the pipeline from raw data to results is not clearly documented – this short piece by Florian Markowetz makes the case nicely.  In science and beyond, there are some classic scare stories of what can happen when the analysis relies on spreadsheets: there’s even a European Spreadsheet Risks Interest Group. There will always be errors in data – and sometimes also in the analysis scripts: the best way to find and eradicate them is to make everything open.
  Problem 2: The logic of benchmarking
The idea of benchmarking is to avoid penalising institutions that take on students from disadvantaged backgrounds:
“Through benchmarking, the TEF metrics take into account the entry qualifications and characteristics of students, and the subjects studied, at each university or college. These can be very different and TEF assessment is based on what each college or university achieves for its particular students within this context. The metrics are also considered alongside further contextual data, about student characteristics at the provider as well as the provider’s location and provision.”
One danger of benchmarking is that it risks entrenching disadvantage. Suppose we have institutions X and Y, which are polar opposites in terms of how well they treat students. X is only interested in getting student fees, does not teach properly, and does not care about drop-outs – we hope such cases are rare, but, as this Panorama exposé showed, they do exist, and we’d hope that TEF would expose them. Y, by contrast, fosters its students and does everything possible to ensure they complete their course.  Let us further suppose that X offers a limited range of vocational courses, whereas Y offers a wider range of academic subjects, and that X has a higher proportion of disadvantaged students. Benchmarking ensures that X will be evaluated relative to other institutions offering similar courses to a similar population. This can lead to a situation where, because poor outcomes at X are correlated with its subject and student profile, expectations are low, and poor scores for student satisfaction and completion rates are not penalised.
Benchmarking is well-intentioned – its aim is to give institutions a chance to shine even if they are working with students who may struggle to learn. However, it runs the risk of making low expectations acceptable. It could be argued that, while there are characteristics of students and courses that affect student outcomes, in general, higher education institutions should not be offering courses where there is a high probability of student drop-out. And students would find it more helpful to see raw data on drop-out rates and student satisfaction, than to merely be told that an institution is Bronze, Silver or Gold – a rating that can only be understood in relative terms.
  Problem 3: The statistics of benchmarking
The method used to do benchmarking comes from Draper and Gittoes (2005), and is explained here. A more comprehensive statistical treatment and critique can be found here.  Essentially, you identify background variables that predict outcomes, assess typical outcomes associated with each combination of these in the whole population under consideration, and then calculate an ‘expected’ score, as a mean of these combinations, weighted by the frequency of each combination at the institution.
The obtained score may be higher or lower than the ‘expected’ value. The question is how you interpret such differences, bearing in mind that some variation is expected just due to random fluctuations. The precision of the estimate of both observed and expected values will increase as the sample size increases: you can compute a standard error around the difference score, and then use statistical criteria to identify cases with difference scores that are likely to be meaningful and not just down to random noise. However, where there is a small number of students, it is hard to distinguish a genuine effect from noise, but where there is a very large number, even tiny differences will be significant. The process used in benchmarking uses statistical criteria to assign ‘flags’ to indicate scores that are extremely good (++), or good (+), or extremely bad (–) or bad (-) in relation to expectation. To ameliorate the problem of tiny effects being flagged in large samples, departures from expectation are flagged only if they exceed a specific number of percentage points.
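As a rough sketch of the mechanics just described (not the actual TEF/HESA algorithm; the benchmark groups, the 1.96 z cut-off and the two-percentage-point gap below are placeholder assumptions), the expected score and flagging logic might look something like this:

```python
import math

# Hypothetical benchmark groups: sector-wide mean indicator (share of positive
# responses) for each combination of contextual factors. All numbers invented.
benchmark_means = {("STEM", "young"): 0.86, ("STEM", "mature"): 0.82,
                   ("Arts", "young"): 0.84, ("Arts", "mature"): 0.80}

def expected_score(student_mix):
    """Weighted mean of the benchmark-group means, weighted by how many of
    the institution's students fall into each combination."""
    total = sum(student_mix.values())
    return sum(benchmark_means[g] * n for g, n in student_mix.items()) / total

def flag(observed, expected, n_students, z_cut=1.96, min_gap=0.02):
    """Toy flagging rule: the gap must be statistically unlikely AND at least
    `min_gap` (two percentage points) in absolute size."""
    se = math.sqrt(expected * (1 - expected) / n_students)  # crude binomial SE
    z = (observed - expected) / se
    if abs(z) >= z_cut and abs(observed - expected) >= min_gap:
        return "+" if z > 0 else "-"
    return "unflagged"

mix = {("STEM", "young"): 900, ("STEM", "mature"): 300,
       ("Arts", "young"): 600, ("Arts", "mature"): 200}
exp = expected_score(mix)                 # about 0.842 for this invented mix
print(flag(0.81, exp, n_students=2000))   # large provider: flagged "-"
print(flag(0.81, exp, n_students=150))    # small provider, same gap: unflagged
```

With the same three-point shortfall, the large provider is flagged and the small one is not, which is exactly the sample-size effect shown in Figure 1 below.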
This is illustrated for the case of one of the NSS measurements in Figure 1, which shows that the problem of sample size has not been solved: a large institution is far more likely to get a flagged score (either positive or negative) than a small one. Indeed, a small institution is a pretty safe bet for a silver award.
Figure 1. The Indicator (x-axis) is the percentage of students with positive NSS ratings, and the z-score (y-axis) shows how far this value is from expectation based on benchmarks. The plot illustrates several things: (a) the range of indicators becomes narrower as sample size increases; (b) most scores are bunched around 85%; (c) for large institutions, even small changes in indicators can make a big difference to flags, whereas for small institutions, most are unflagged, regardless of the level of indicator; (d) the number of extreme flags (filled circles or asterisks) is far greater for large than small institutions.
  Problem 4: Benchmarking won’t work at subject level
From a student perspective, it is crucial to have information about specific courses; institution-wide evaluation is not much use to anyone other than vice-chancellors who wish to brag about their rating. However, the problems I have outlined with small samples are amplified if we move to subject-level evaluation.  I raised this issue with the TEF metrics team, and was told:
‘The issue of smaller student numbers ‘defaulting’ to silver is something we are aware of. Paragraph 94 on page 29 of the report on findings from the first subject pilot mentions some OfS analysis on this. The Government consultation response also has a section on this. On page 40, the government response to question 10 refers to assessability, and potential methods that could be used to deal with this in future runs of the TEF.’
So the OfS knows they have a problem, but seems determined to press on, rather than rethinking the exercise.
  Problem 5: You’ll never be good enough
The benchmarks used in TEF are based on identifying statistical outliers. Forget for a moment the sample size issue, and suppose we have a set of institutions with broadly the same large number of students, and a spread of scores on a metric, such that the mean percentage meeting criterion is 80%, with a standard deviation of 2% (see Figure 2). We flag the bottom 10% (those with scores below 77.5%) as problematic. In the next iteration of the exercise, those with low scores have either gone out of business, improved their performance, or learned how to game the metric, and so we no longer have anyone scoring below 77.5%. The mean score thus increases and the standard error decreases. So now, on statistical grounds, a score below 78.1% gets flagged as problematic. In short, with a statistical criterion for poor performance, even if everyone improves dramatically, or poor-performers drop out, there will still be those at the bottom of the distribution – unless we get to a point where there is no meaningful variation in scores.
  Figure 2: Simulated data showing how improvements in scores can lead to increasing cutoff in the next round if statistical criterion is adopted.
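A quick simulation of this ratchet, again with invented numbers (200 institutions, mean 80%, SD 2%, bottom 10% flagged each round), shows how the cut-off creeps upward once flagged institutions improve or disappear:

```python
import random

random.seed(1)
scores = [random.gauss(80, 2) for _ in range(200)]    # invented institutions

for rnd in range(1, 6):
    cutoff = sorted(scores)[int(len(scores) * 0.10)]  # roughly the bottom 10%
    print(f"round {rnd}: flagged below {cutoff:.1f} ({len(scores)} institutions)")
    # Flagged institutions improve or drop out before the next round, so the
    # distribution tightens and the statistical cut-off creeps upward.
    scores = [s for s in scores if s >= cutoff]
```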
  The bottom line
TEF may be summarised thus:
Take a heterogeneous mix of variables, all of them proxy indicators for ‘teaching excellence’, which vary hugely in their reliability, sensitivity and availability
Transform them into difference scores by comparing them with ‘expected’ scores derived from a questionable benchmarking process
Convert difference scores to ‘flags’, whose reliability varies with the size of the institution
Interpret these in the light of qualitative information provided by institutions
All to end up with a three-point ordinal scale, which does not provide students with the information that they need to select a course.
Time, maybe, to ditch the TEF and encourage students to consult the raw data instead to find out about courses?
from RSSMix.com Mix ID 8239600 http://cdbu.org.uk/the-problem-with-tef-a-look-at-the-technical-failings/ via IFTTT
0 notes
joannlyfgnch · 6 years
Text
No More FAQs: Create Purposeful Information for a More Effective User Experience
It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning.
As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.
The problem with FAQs
Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ‘80s called, and they want their style back!
Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web. The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get.
For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly. Looking at each issue in more detail:
Duplicate and contradictory information: Even on small websites, it can be hard to maintain information. On large sites with multiple authors and an unclear content strategy, information can get out of sync quickly, resulting in duplicate or even contradictory content. I once purchased food online from a company after reading in their FAQ—the content that came up most often when searching for allergy information—that the product didn’t contain nuts. However, on receiving the product and reading the label, I realized the FAQ information was incorrect, and I was able to obtain a refund. An information architecture (IA) strategy that includes clear pathways to key content not only better supports user information needs that drive purchases, but also reduces company risk. If you do have to put information in multiple locations, consider using an object-oriented content management system (CMS) so content is reused, not duplicated. (Our company open-sourced one called Fae.)
Lack of discernible content order: Humans want information to be ordered in ways they can understand, whether it’s alphabetical, time-based, or by order of operation, importance, or even frequency. The question format can disguise this organization by hiding the ordering mechanism. For example, I could publish a page that outlines a schedule of household maintenance tasks by frequency, with natural categories (in order) of daily, weekly, monthly, quarterly, and annually. But putting that information into an FAQ format, such as “How often should I dust my ceiling fan?,” breaks that logical organization of content—it’s potentially a stand-alone question. Even on a site that’s dedicated only to household maintenance, that information will be more accessible if placed within the larger context of maintenance frequency.
Repetitive grammatical structure: Users like to scan for information, so having repetitive phrases like “How do I …” that don’t relate to the specific task make it much more difficult for readers to quickly find the relevant content. In a lengthy help page with catch-all categories, like the Patagonia FAQ page, users have to swim past a sea of “How do I …,” “Why can’t I …,” and “What do I …” phrases to get to the actual information. While categories can help narrow the possibilities, the user still has to take the time to find the most likely category and then the relevant question within it. The Patagonia website also shows how an FAQ section can become a catch-all. Oh, how I’d love the opportunity to restructure all that Patagonia information into purposeful information designed to address user needs at the exact right moment. So much potential!
Increased cognitive load: As well as being repetitive, the question format can also be surprisingly specific, forcing users to mentally break apart the wording of the questions to find a match for their need. If a question appears to exclude the required information, the user may never click to see the answer, even if it is actually relevant. Answers can also raise additional, unnecessary questions in the minds of users. Consider the FAQ-formatted “Can I pay my bill with Venmo?” (which limits the answer to one payment type that only some users may recognize). Rewriting the question to “How can I pay my bill online?” and updating the content improves the odds that users will read the answer and be able to complete their task. However, an even better approach is to create purposeful content under the more direct and concise heading “Online payment options,” which is broad enough to cover all payment services (as a topic in the “Bill Payments” portion of a website), as well as instructions and other task-orientated information.
Longer content requirements: In most cases, questions have a longer line length than topic headings. The Airbnb help page illustrates when design and content strategy clash. The design truncates the question after 40 characters when the browser viewport is wider than 743 pixels. You have to click the question to find out if it holds the answer you need—far from ideal! Yet the heading “I’m a guest. How do I check the status of my reservation?” could easily have been rewritten as “Checking reservation status” or even “Guests: Checking reservation status.” Not only do these alternatives fit within the line length limitations set by the design, but the lower word count and simplified English also reduce translation costs (another issue some companies have to consider).
Purposeful information
Grounded in the Minimalist approach to technical documentation, the idea behind purposeful information is that users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge). Different websites—and even different areas within a single website—may be aimed at different users and different purposes. Organizations also have goals when they construct websites, whether they’re around brand awareness, encouraging specific user behavior, or meeting legal requirements. Companies that meld user and organization goals in a way that feels authentic can be very successful in building brand loyalty.
Commerce sites, for example, have the goal of driving purchases, so the information on the site needs to provide content that enables effortless purchasing decisions. For other sites, the goal might be to drive user visits, encourage newsletter sign-ups, or increase brand awareness. In any scenario, burying in FAQs any pathways needed by users to complete their goals is a guaranteed way to make it less likely that the organization will meet theirs.
By digging into what users need to accomplish (not a general “they need to complete the form,” but the underlying, real-world task, such as getting a shipping quote, paying a bill, accessing health care, or enrolling in college), you can design content to provide the right information at the right time and better help users accomplish those goals. As well as making it less likely you’ll need an FAQ section at all, using this approach to generate a credible IA and content strategy—the tools needed to determine a meaningful home for all your critical content—will build authority and user trust.
Defining specific goals when planning a website is therefore essential if content is to be purposeful throughout the site. Common user-centered methodologies employed during both IA and content planning include user-task analysis, content audits, personas, user observations, and analysis of call center data and web analytics. A complex project might use multiple methodologies to define the content strategy and supporting IA to provide users with the necessary information.
The redesign of the Oliver Winery website is a good example of creating purposeful information instead of resorting to an FAQ. There was a user goal of being able to find practical information about visiting the winery (such as details regarding food, private parties, etc.), yet this information was scattered across various pages, including a partially complete FAQ. There was a company goal of reducing the volume of calls to customer support. In the redesign, a single page called “Plan Your Visit” was created with all the relevant topics. It is accessible from the “Visit” section and via the main navigation.
The system used is designed to be flexible. Topics are added, removed, and reordered using the CMS, and published on the “Plan Your Visit” page, which also shows basic logistical information like hours and contact details, in a non-FAQ format. Conveniently, contact details are maintained in only one location within the CMS yet published on various pages throughout the site. As a result, all information is readily available to users, increasing the likelihood that they’ll make the decision to visit the winery.
If you have to include FAQs
This happens. Even though there are almost always more effective ways to meet user needs than writing an FAQ, FAQs happen. Sometimes the client insists, and sometimes even the most ardent opponent (ahem) concludes that in a very particular circumstance, an FAQ can be purposeful. The most effective FAQ is one with a specific, timely, or transactional need, or one with information that users need repeated access to, such as when paying bills or organizing product returns.
Good topics for an FAQ include transactional activities, such as those involved in the buying process: think shipments, payments, refunds, and returns. By being specific and focusing on a particular task, you avoid the categorization problem described earlier. By limiting questions to those that are frequently asked AND that have a very narrow focus (to reduce users having to sort through lots of content), you create more effective FAQs.
Amazon’s support center has a great example of an effective FAQ within their overall support content because they have exactly one: “Where’s My Stuff?.” Set under the “Browse Help Topics” heading, the question leads to a list of task-based topics that help users track down the location of their missing packages. Note that all of the other support content is purposeful, set in a topic-based help system that’s nicely categorized, with a search bar that allows users to dive straight in.
Conference websites, which by their nature are already focused on a specific company goal (conference sign-ups), often have an FAQ section that covers basic conference information, logistics, or the value of attending. This can be effective. However, for the reasons outlined earlier, the content can quickly become overwhelming if conference organizers try to include all information about the conference as a single list of questions, as demonstrated by Web Summit’s FAQ page. Overdoing it can cause confusion even when the design incorporates categories and an otherwise useful UX that includes links, buttons, or tabs, such as on the FAQ page of The Next Web Conference.
In examining these examples, it’s apparent how much more easily users could access the information if it wasn’t presented as questions. But if you do have to use FAQs, here are my tips for creating the best possible user experience.
Creating a purposeful FAQ:
Make it easy to find.
Have a clear purpose and highly specific content in mind.
Give it a clear title related to the user tasks (e.g., “Shipping FAQ” rather than just “FAQ”).
Use clear, concise wording for questions.
Focus questions on user goals and tasks, not on product or brand.
Keep it short.
What to avoid in any FAQ:
Don’t include “What does FAQ stand for?” (unfortunately, not a fictional example). Instead, simply define acronyms and initialisms on first use.
Don’t define terms using an FAQ format—it’s a ticket straight to documentation hell. If you have to define terms, what you need is a glossary, not FAQs.
Don’t tell your brand story or company history, or pontificate. People don’t want to know as much about your brand, product, and services as you are eager to tell them. Sorry.
In the end, always remember your users
Your website should be filled with purposeful content that meets users’ core needs and fulfills your company’s objectives. Do your users and your bottom line a favor and invest in effective user analysis, IA, content strategy, and documentation. Your users will be able to find the information they need, and your brand will be that much more awesome as a result.
http://ift.tt/2ASiYze
0 notes
suzanneshannon · 5 years
Text
Real World Cloud Migrations: Azure Front Door for global HTTP and path based load-balancing
As I've mentioned lately, I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.
I'm breaking a single machine into a series of small sites BUT I want to still maintain ALL my existing URLs (for good or bad) and the most important one is hanselman.com/blog/ that I now want to point to hanselmanblog.azurewebsites.net.
That means that Azure Front Door will be receiving all the traffic - it's the Front Door! - and then forwarding it on to the Azure Web App. That means:
hanselman.com/blog/foo -> hanselmanblog.azurewebsites.net/foo
hanselman.com/blog/bar -> hanselmanblog.azurewebsites.net/bar
hanselman.com/blog/foo/bar/baz -> hanselmanblog.azurewebsites.net/foo/bar/baz
There's a few things to consider when dealing with reverse proxies like this and I've written about that in detail in this article on Dealing with Application Base URLs and Razor link generation while hosting ASP.NET web apps behind Reverse Proxies.
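As a rough sketch of what that reverse-proxy handling can involve (an illustration, not the exact code from that article), an ASP.NET Core app sitting behind a proxy like this typically needs to honor the forwarded headers and, if the /blog prefix reaches the app, register a path base so routing and Razor link generation include it. The /blog path base below is an assumption for illustration.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.HttpOverrides;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Honor X-Forwarded-For / X-Forwarded-Proto set by the proxy so the app
        // sees the original client address and scheme instead of the proxy's.
        app.UseForwardedHeaders(new ForwardedHeadersOptions
        {
            ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
        });

        // Assumed for illustration: the app is reachable under /blog, so register
        // that path base so generated links carry the prefix.
        app.UsePathBase("/blog");

        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from behind the proxy");
        });
    }
}

Whether UsePathBase is needed at all depends on whether the proxy preserves the /blog prefix or strips it before forwarding - exactly the kind of wrinkle that article digs into.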
You can and should read in detail about Azure Front Door here.
It's worth considering a few things. Front Door MAY be overkill for what I'm doing because I have a small, modest site. Right now I've got several backends, but they aren't yet globally distributed. If I had a system with lots of regions and lots of App Services all over the world AND a lot of static content, Front Door would be a perfect fit. Right now I have just a few App Services (Backends in this context) and I'm using Front Door primarily to manage the hanselman.com top level domain and manage traffic with URL routing.
On the plus side, that might mean Azure Front Door was exactly what I needed. It was super easy to set up Front Door, as there's a visual Front Door Designer. It took less than 20 minutes to get it all routed, and SSL certs took just a few hours more. You can see below that I associated staging.hanselman.com with two Backend Pools. This UI in the Azure Portal is (IMHO) far easier than the Azure Application Gateway. Additionally, Front Door is Global while App Gateway is Regional. If you were a massive global site, you might put Azure Front Door in, ahem, front, and Azure App Gateway behind it, regionally.
Again, a little overkill as my Pools are pools of one, but it gives me room to grow. I could easily balance traffic globally in the future.
CONFUSION: In the past with my little startup I've used Azure Traffic Manager to route traffic to several App Services hosted all over the globe. When I heard of Front Door I was confused, but it seems like Traffic Manager is mostly global DNS load balancing for any network traffic, while Front Door is Layer 7 load balancing for HTTP traffic that routes based on a variety of factors. Azure Front Door can also act as a CDN and cache all your content. There's lots of detail on Front Door's routing architecture and traffic routing methods. Azure Front Door is definitely the most sophisticated and comprehensive system for fronting all my traffic. I'm still learning what's the right size app for it and I'm not sure a blog is the ideal example app.
Here's how I set up /blog to hit one Backend Pool. I have it accepting both HTTP and HTTPS. Originally I had a few extra Front Door rules, one for HTTP, one for HTTPS, and I set the HTTP one to redirect to HTTPS. However, Front Door charges 3 cents an hour for each of the first 5 routing rules (then about a penny an hour for each rule after 5), and I don't (personally) think I should pay for what I consider "best practice" rules. That means forcing HTTPS (an internet standard, these days) as well as URL canonicalization with a trailing slash after paths. That means /blog should 301 to /blog/ etc. These are simple prescriptive things that everyone should be doing. If I was putting a legacy app behind a Front Door, then this power and flexibility in path control would be a boon that I'd be happy to pay for. But in these cases I may be able to have that redirection work done lower down in the app itself and save money every month. I'll update this post if the pricing changes.
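For what it's worth, here's a minimal sketch of what doing that redirection work lower down in the app could look like with the ASP.NET Core rewrite middleware - an illustration of the approach, not the rules actually running on this site.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        var rewriteOptions = new RewriteOptions()
            // Force HTTPS with a permanent (301) redirect.
            .AddRedirectToHttpsPermanent()
            // Canonicalize paths that don't end in a slash: blog -> blog/.
            .AddRedirect(@"^(.*[^/])$", "$1/", 301);

        app.UseRewriter(rewriteOptions);

        // ... the rest of the pipeline (static files, MVC, etc.) goes here.
    }
}

The trailing-slash regex is deliberately naive; a real app would want to exclude requests with file extensions so something like /site.css doesn't get redirected to /site.css/.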
After I set up Azure Front Door I noticed my staging blog was getting hit every few seconds, all day, forever. I realized there are some health checks, but since there are 80+ Azure Front Door locations and they are all checking the health of my app, it was adding up to a lot of traffic. For a large app, you need these health checks to make sure traffic fails over and you really know if your app is healthy. For my blog, less so.
There's a few ways to tell Front Door to chill. First, I don't need Azure Front Door doing a GET request on /. I can instead ask it to check something lighter weight. With ASP.NET Core 2.2 it's as easy as adding HealthChecks. It's much easier, means less traffic, and you can make the health check as comprehensive as you want.
app.UseHealthChecks("/healthcheck");
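For context, that single line assumes the health check services are registered as well. A minimal ASP.NET Core 2.2 Startup wiring might look like this (the /healthcheck path is simply whatever you point Front Door's health probe at):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the health check service; custom checks (database, storage,
        // downstream APIs) can be chained here for a more comprehensive probe.
        services.AddHealthChecks();

        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Lightweight endpoint for Front Door's probes - returns a tiny "Healthy"
        // response instead of rendering a full page on every check.
        app.UseHealthChecks("/healthcheck");

        app.UseMvc();
    }
}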
Next I turned the Interval WAY up so it wouldn't bug me every few seconds.
These two small changes made a huge difference in my traffic as I didn't have so much extra "pinging."
After setting up Azure Front Door, I also turned on Custom Domain HTTPS and pointed staging to it. It was very easy to set up and was included in the cost.
I haven't decided if I want to set up Front Door's caching or not, but it might be an easier, more central approach than using a CDN manually and changing the URLs for my sites' static content and images. In fact, the POP (Point of Presence) locations for Front Door are the same as those for Azure CDN.
NOTE: I will have to at some point manage the Apex/Naked domain issue where hanselman.com and www.hanselman.com both resolve to my website. It seems this can be handled by either CNAME flattening or DNS chasing and I need to check with my DNS provider to see if this is supported. I suspect I can do it with an ALIAS record. Barring that, Azure also offers an Azure DNS hosting service.
There is another option I haven't explored yet called Azure Application Gateway that I may test out and see if it's cheaper for what I need. I primarily need SSL cert management and URL routing.
I'm continuing to explore as I build out this migration plan. Let me know your thoughts in the comments.
Sponsor: Develop Xamarin applications without difficulty with the latest JetBrains Rider: Xcode integration, JetBrains Xamarin SDK, and manage the required SDKs for Android development, all right from the IDE. Get it today
© 2019 Scott Hanselman. All rights reserved.
      Real World Cloud Migrations: Azure Front Door for global HTTP and path based load-balancing published first on https://deskbysnafu.tumblr.com/
0 notes
lopezdorothy70-blog · 5 years
Text
Forced Water Fluoride Poisoning: More People in U.S. Drink Fluoride-adulterated Water Than All Other Countries Combined
Image source.
U.S. Water Fluoridation: A Forced Experiment that Needs to End
By the Children's Health Defense Team
The United States stands almost entirely alone among developed nations in adding industrial silicofluorides to its drinking water-imposing the community-wide measure without informed consent.
Globally, roughly 5% of the population consumes chemically fluoridated water, but more people in the U.S. drink fluoride-adulterated water than in all other countries combined.
Within the U.S., just under a third (30%) of local water supplies are not fluoridated; these municipalities have either held the practice at bay since fluoridation's inception or have won hard-fought battles to halt water fluoridation.
The fluoride chemicals added to drinking water are unprocessed toxic waste products-captured pollutants from Florida's phosphate fertilizer industry or unregulated chemical imports from China.
The chemicals undergo no purification before being dumped into drinking water and often harbor significant levels of arsenic and other heavy metal contamination; one researcher describes this unavoidable contamination as a
“regulatory blind spot that jeopardizes any safe use of fluoride additives.”
Dozens of studies and reviews-including in top-tier journals such as The Lancet-have shown that fluoride is neurotoxic and lowers children's IQ. Fluoride is also associated with a variety of other health risks in both children and adults.
However, U.S. officialdom persists in making hollow claims that water fluoridation is safe and beneficial, choosing to ignore even its own research!
A multimillion-dollar longitudinal study published in Environmental Health Perspectives in September, 2017, for example, was largely funded by the National Institutes of Health and National Institute of Environmental Health Sciences-and the seminal study revealed a strong relationship between fluoride exposure in pregnant women and lowered cognitive function in offspring.
Considered in the context of other research, the study's implications are, according to the nonprofit Fluoride Action Network, “enormous”-“a cannon shot across the bow of the 80 year old practice of artificial fluoridation.”
A little history
During World War II, fluoride (a compound formed from the chemical element fluorine) came into large-scale production and use as part of the Manhattan Project.
According to declassified government documents summarized by Project Censored, Manhattan Project scientists discovered early on that fluoride was a “leading health hazard to bomb program workers and surrounding communities.”
In order to stave off lawsuits, government scientists:
“embarked on a campaign to calm the social panic about fluoride…by promoting its usefulness in preventing tooth decay.”
To prop up its “exaggerated claims of reduction in tooth decay,” government researchers began carrying out a series of poorly designed and fatally flawed community trials of water fluoridation in a handful of U.S. cities in the mid-1940s.
In a critique decades later, a University of California-Davis statistician characterized these early agenda-driven fluoridation trials as:
“especially rich in fallacies, improper design, invalid use of statistical methods, omissions of contrary data, and just plain muddleheadedness and hebetude.”
As one example, a 15-year trial launched in Grand Rapids, Michigan in 1945 used a nearby city as a non-fluoridated control, but after the control city began fluoridating its own water supply five years into the study, the design switched from a comparison with the non-fluoridated community to a before-and-after assessment of Grand Rapids.
Fluoridation's proponents admitted that this change substantially “compromised” the quality of the study.
In 1950, well before any of the community trials could reach any conclusions about the systemic health effects of long-term fluoride ingestion, the U.S. Public Health Service (USPHS) endorsed water fluoridation as official public health policy, strongly encouraging communities across the country to adopt the unproven measure for dental caries prevention.
Describing this astonishingly non-evidence-based step as “the Great Fluoridation Gamble,” the authors of the 2010 book, The Case Against Fluoride, argue that:
“Not only was safety not demonstrated in anything approaching a comprehensive and scientific study, but also a large number of studies implicating fluoride's impact on both the bones and the thyroid gland were ignored or downplayed” (p. 86).
In 2015, Newsweek magazine not only agreed that the scientific rationale for putting fluoride in drinking water was not as “clear-cut” as once thought but also shared the “shocking” finding of a more recent Cochrane Collaboration review, namely, that there is no evidence to support the use of fluoride in drinking water.
Bad science and powerful politics
The authors of The Case Against Fluoride persuasively argue that “bad science” and “powerful politics” are primary factors explaining why government agencies continue to defend the indefensible practice of water fluoridation, despite abundant evidence that it is unsafe both developmentally and after “a lifetime of exposure to uncontrolled doses.”
Comparable to Robert F. Kennedy, Jr.'s book, Thimerosal: Let the Science Speak, which summarizes studies that the Centers for Disease Control and Prevention (CDC) and “credulous journalists swear don't exist,” The Case Against Fluoride is an extensively referenced tour de force, pulling together hundreds of studies showing evidence of fluoride-related harm.
The research assembled by the book's authors includes studies on fluoride biochemistry; cancer; fluoride's effects on the brain, endocrine system and bones; and dental fluorosis.
With regard to the latter, public health agencies like to define dental fluorosis as a purely cosmetic issue involving “changes in the appearance of tooth enamel,” but the International Academy of Oral Medicine & Toxicology (IAOMT)-a global network of dentists, health professionals and scientists dedicated to science-based biological dentistry-describes the damaged enamel and mottled and brittle teeth that characterize dental fluorosis as “the first visible sign of fluoride toxicity.”
The important 2017 study that showed decrements in IQ following fluoride exposure during pregnancy is far from the only research sounding the alarm about fluoride's adverse developmental effects.
In his 2017 volume, Pregnancy and Fluoride Do Not Mix, John D. MacArthur pulls together hundreds of studies linking fluoride to premature birth and impaired neurological development (93 studies), preeclampsia (77 studies) and autism (110 studies).
The book points out that rates of premature birth are “unusually high” in the United States.
At the other end of the lifespan, MacArthur observes that death rates in the ten most fluoridated U.S. states are 5% to 26% higher than in the ten least fluoridated states, with triple the rate of Alzheimer's disease. A 2006 report by the National Research Council warned that exposure to fluoride might increase the risk of developing Alzheimer's.
The word is out
Pregnancy and Fluoride Do Not Mix shows that the Institute of Medicine, National Research Council, Harvard's National Scientific Council on the Developing Child, Environmental Protection Agency (EPA) and National Toxicology Program all are well aware of the substantial evidence of fluoride's developmental neurotoxicity, yet no action has been taken to warn pregnant women.
Instead, scientists with integrity, legal professionals and the public increasingly are taking matters into their own hands. A Citizens Petition submitted in 2016 to the EPA under the Toxic Substances Control Act requested that the EPA “exercise its authority to prohibit the purposeful addition of fluoridation chemicals to U.S. water supplies.”
This request-the focus of a lawsuit to be argued in court later in 2019-poses a landmark challenge to the dangerous practice of water fluoridation and has the potential to end one of the most significant chemical assaults on our children's developing bodies and brains.
Read the full article at ChildrensHealthDefense.org.
© 2018 Children's Health Defense, Inc.
This work is reproduced and distributed with the permission of Children's Health Defense, Inc.
Want to learn more from Children's Health Defense? Sign up for free news and updates from Robert F. Kennedy, Jr. and the Children's Health Defense. Your donation will help to support them in their efforts.
0 notes