#alexi's phd year 1
alexistudies · 1 month
Text
wednesday, march 13th 2024
i think one of the best things to come out of my PhD (so far) is that i am learning to embrace finitude, and focusing more on the aspects of "what did i learn from this" instead of dwelling on getting a great grade. let me explain.
let's say i was taking this intro to neural engineering class in undergrad, where these hard as hell MATLAB assignments were due like once a month. i would be literally shitting a brick trying to get it right, staying up late and compromising my health for the assignment, and then stressing like crazy over the grade. and then it would all, eventually, end up being fine and not the end of the world.
now, as a 1st year PhD student, in this exact position of having a hard as hell assignment that i kind of just figured out as i went, i'm not even trippin. like, i'm probably going to get half credit and i'm genuinely fine with that. why? because I: 1) embraced the finitude of knowing that it's just not going to get done and 2) am celebrating the fact that I even learned the concepts to get half credit.
this...is a big mental shift that i'm really trying to cherish, because it took soooo long for me to get here. i also realize that i'm able to have this change in attitude now because undergrad is over, and for a PhD program, grades don't really matter as long as you meet the requirements to stay in your program. in undergrad, you're aiming for good grades because you want to have access to the next goal (job, internship, or grad school). but my god, is this such a RELIEVING mindset shift.
197 notes · View notes
betwixtyiff · 2 years
Note
Will you drop a skincare tutorial, some of us are breaking OUT! And you seem to know what you're doing, so....
Well I'm not gonna drop a full routine because I'm not a dermatologist, nor am I an esthetician, and as such I'm not comfy saying the advice I'm giving is ironclad.
However I do feel comfortable giving a few general pointers I've picked up from watching other derms and estheticians on YouTube:
Sunscreen is a necessity. Yes, even on cloudy days or when you're indoors. UV (ultraviolet) rays are the biggest factor in skin damage and the biggest cause of skin cancer. A mineral sunscreen uses naturally derived mineral filters to shield you from UV rays, whereas a chemical sunscreen will use synthesized chemical filters to do so. Both have their merits and work for different people
With that in mind, I'll be honest; shopping for a good sunscreen is hard. Especially if you have a dark skin tone. But honestly, if you decide to break the bank for anything in your routine, let it be your sunscreen.
La Roche-Posay is a brand that makes quite a few tinted mineral sunscreens (the tint is so that you're less likely to be on the receiving end of a white cast) that I've read very good reviews of.
If chemical sunscreens are more up your alley and you're able to, I recommend looking into South Korean sunscreens
The most basic routine you can get is just a cleanser and sunscreen
Introducing an extra humectant to your skin can help a great deal with hydration! Humectants are ingredients that help preserve or attract moisture in the skin. A great one to look for, both as a separate serum and as a component in moisturizers, is hyaluronic acid!
Finding out your skin type is a good start to finding out which products might work for you. This guide from CeraVe is a great and easy start to that!
If you're acne prone and have tried a multitude of things to get rid of it (I myself had horrible acne growing up, and even with the antibiotics I took for it I never felt like I had it under control until the last couple years), then products containing salicylic acid are something I recommend.
If you can, I advise staying away from products containing artificial fragrance. Oftentimes those fragrances are achieved with essential oils, which are normally formulated with high amounts of alcohol, which can be incredibly drying or irritating to the skin. If you turn the container around and look at the ingredients label and see "Fragrance," "Parfum," "Limonene," or "Linalool," that's your tip-off
That being said, oftentimes even if you have oilier skin, your skin CAN need some naturally derived or cold-pressed oil to help balance things out! A brand called The Ordinary has a 1 oz bottle of rosehip seed oil that I use every night; just two drops combined with my moisturizer is enough
Above all else: everyone's skin is different and what works for my skin is not necessarily going to work for you
I also highly recommend checking out the following people as they're who I watch constantly when I'm looking for products as well as info:
Dr. Dray - Board certified dermatologist on YouTube and immensely knowledgeable
Dr. Alexis Stephens - Another board certified dermatologist on YouTube who is also very knowledgeable but places a focus on skincare for black people and other darker skin tones
Cassandra Bankson - Medical esthetician who is open about her struggles with acne and has a lot of videos relating to such
Michelle Wong/Lab Muffin Beauty Science - Chemistry PhD and self described skincare nerd who has a lot of videos relating to why certain ingredients react with our skin the way they do as well as just the science behind cosmetics in general
7 notes · View notes
earaercircular · 2 years
Text
Materrup, Lhyfe and Corail: three companies committed to low carbon
Sneakers made from plastic recovered from the sea, low-carbon cement and concrete, hydrogen produced by wind turbines… Materrup, Lhyfe and Corail have adopted models with a low environmental impact
Materrup, low-carbon concrete
With 35 patents protecting its Crosslinked clay cement technology[1], the greentech Materr'UP,[2] based in Les Landes[3], intends to transform one of the most polluting industries on the planet in the long term: cement accounts for nearly 8% of global greenhouse gas emissions. "We incorporate up to 70% raw clay and minerals in the cement to produce a cold reaction that provides the same mechanical properties as traditional industrial techniques," explains founder Mathieu Neuville, a PhD in materials physics who previously worked in Lafarge's[4] research division.
With this green cement, the company guarantees a reduction in CO2 emissions of at least half and is working on zero-carbon formulas. Thanks to an investment of 7 million euros, Materrup started up a pilot plant at the beginning of the year. Its capacity: 50,000 tonnes of clay cement, offered in the form of concrete breezeblocks and car park slabs produced from materials from nearby quarries.
Lhyfe, green hydrogen
With more than 110 million euros raised on the stock market, the Nantes[5]-based start-up Lhyfe[6] now has the means to deploy its green hydrogen production sites en masse in Europe. And there is no question of producing this fuel from fossil energy! To power its electrolyser, the company undertakes to use only renewable electricity produced by wind turbines.
A first plant opened in 2021 in Vendée[7] and will produce up to 1 tonne of clean gas at full capacity from water pumped from the sea, using a process, kept secret, that adapts to the intermittent nature of renewable energies. "Enough to supply all the buses and garbage trucks in a city of 50,000 inhabitants," claims Matthieu Guesné, president and founder of the company. Lhyfe is working on 93 installation projects in France and Europe. Its capacity should reach 200 megawatts in 2026, providing a consolidated turnover of around 200 million euros.
Corail, recycled plastic sneakers
Each pair of sneakers from the young Corail[8] brand contains an average of 8 plastic bottles collected by Marseille fishermen. With this recycling promise, Paul Guedj and Alexis Troccaz, the co-founders of this Marseille-based company, have already won over more than 12,000 customers online and cleared the Mediterranean of as much floating waste (12,000 tonnes). The recovered bottles are reduced to pellets and then transformed into yarn woven in Portugal. "Our goal is to move towards a zero carbon impact," explain the co-founders.
The carbon footprint of recycled plastic is 25 to 60% lower than that of virgin plastic (source: Citéo[9]). Shoelaces, inner fabrics, the base of the outer fabrics and the soles are already 100% recycled. There remains the polyurethane layer, produced today from corn. With its first profits, Corail acquired its own trawl to secure its stock of raw materials. Offered in a single unisex model in different colours, these shoes brought in 500,000 euros in sales in 2021.
Source
Paul Molga: Materrup, Lhyfe et Corail : trois entreprises engagées vers la sobriété carbone, in: Les Echos, 1-6-2022.
[1] The Landes-based start-up produces a cement based on raw clay from nearby quarries. The technology used and the choice of short circuits drastically reduce its carbon footprint. https://www.usinenouvelle.com/article/materrup-cimente-a-l-argile-crue.N1789297
[2] Materrup, a young industrial company with impact, labelled Greentech and DeepTech, has developed a low-carbon cement, based on raw clay from the nearby quarries. This local cement cuts CO2 emissions in half, without compromising the performance or quality of the concrete. This clay cement is an immediate response to the challenges of the cement/concrete sector in search of less energy-intensive and less carbon-intensive solutions worldwide. Materrup produces and supplies a type 42.5 cement, local, virtuous and efficient. In 2022, this cement is reserved exclusively for its partners. https://en.materrup.com/
[3] Landes is a department in the Nouvelle-Aquitaine region of Southwestern France, with a long coastline on the Atlantic Ocean to the west. It borders Gers to the east, Pyrénées-Atlantiques to the south, Lot-et-Garonne to the north-east, and Gironde to the north. It had a population of 405,010 as of 2016. Its prefecture is Mont-de-Marsan.
[4] Lafarge is a French industrial company specialising in three major products: cement, construction aggregates, and concrete. It was founded in 1833 by Joseph-Auguste Pavin de Lafarge and is part of the Holcim Group. On 10 July 2015, Lafarge merged with Holcim. Later that July, the new company was officially launched around the globe under the name of LafargeHolcim, which later was renamed to Holcim Group in 2021. Although the merger was completed, the Lafarge brand is retained within the group.
[5] Nantes is a city in Loire-Atlantique on the Loire, 50 km from the Atlantic coast. The city is the sixth largest in France, with a population of 309,346 in Nantes and a metropolitan area of nearly 973,000 inhabitants (2017). With Saint-Nazaire, a seaport on the Loire estuary, Nantes forms one of the main north-western French metropolitan agglomerations.
[6] Lhyfe is a pure hydrogen player that deploys modular 100% green hydrogen industrial production plants for mobility and industry. Lhyfe structures, designs, develops and operates its sites in a sustainable and agile way for clean, accessible and competitively priced hydrogen with minimal environmental impact. https://www.lhyfe.com/
[7] Vendée is a department in the Pays de la Loire region in Western France, on the Atlantic coast. In 2016, it had a population of 670,597. Its prefecture is La Roche-sur-Yon.
[8] The sneakers that clean the sea. 8 plastic bottles in each pair, harvested by our crew of fishermen in Marseille https://www.corail.co/
[9] Citeo is an inter-company coalition at the service of the circular economy https://www.lesechos.fr/thema/articles/citeo-une-coalition-interentreprises-au-service-de-leconomie-circulaire-1371622
0 notes
wild-aloof-rebel · 4 years
Photo
I’m officially caught up again, so here’s the expected mid-October recs, plus those from the Elevate! event.
.
<1k words
I See You Pretending by Januarium (rated T) The gang do a group costume for Halloween.
Latched by BotchedExperiment *Alexis/Stevie* (rated T) She’s acting strange, but Stevie decides not to ask. Asking questions when it comes to the Rose family is a slippery slope.
.
1 - 3k words
Friday night dinner crowd by upbeat (rated G) Patrick and Twyla have a little talk before David arrives at the cafe for his birthday dinner.
Inside Scoop by swat117 (rated T) Patrick and David: ice cream elitists. It was an accident, they swear.
.
3 - 5k words
spelling it out by helvetica_upstart *Stevie/Twyla* (rated T) “I think there's something weird going on with Twyla," Stevie whispers into her phone. There’s a long pause before David says, “Are you just now noticing or…?” or, 5 times Stevie didn't realize Twyla was a witch + 1 time she did
.
5 - 10k words
Ba Dum Tssss by Januarium (rated E) “You wouldn’t expect me to have any dildos?” David has to ask.[...]
Imposter Syndrome by petrodobreva *Alexis/Stevie* (rated E) She has stayed here so many times over the years; the Roses stopped pretending that Stevie might stay in a hotel when passing through LA or New York years ago. She’s had a key to this place since Alexis bought it. She leaves two sets of sleep clothes here. And a mug.
Just make me come unglued by yourbuttervoicedbeau *Stevie/Twyla* (rated T) 5 things Stevie realises about Twyla, and one thing Twyla realises about Stevie, thanks to a trip to New York City.
know that i’m yours (to keep) by singsongsung *Alexis/Twyla* (rated T) Five times Alexis and Twyla talk at Café Tropical. And one time they talk somewhere else.
.
10k+ words
The Immortals of Schitt’s Creek by middyblue (rated E) In 1915, David and Alexis stumble into a grove of trees and become immortal, just like their new friend Stevie. In the autumn of 2015, a lonely PhD student comes to town, forcing David to reckon with how honest to be about his past and who he is. Fluff! Angst! Found family! An apple festival with an all-you-can-eat pastry contest! Welcome to autumn in Schitt's Creek.
Maybe If You Stayed by bigficenergy *Alexis/Twyla* (rated E) When Alexis gets to the bar, Twyla is already there, looking as ever like sunshine personified in one of the Isabel Marant tops Alexis had given her, paired with some Old Navy jeans. Not even realizing she’d been carrying any tension, Alexis physically relaxes at the sight of her old friend. Years down the line, this is the moment she will look back on as the moment she should have known she was in love.
A Secret Power by Distractivate *Rachel/OFC* (rated M) After her talk with Patrick in Schitt's Creek, Rachel goes on her own journey in search of love, happiness, and identity. Along the way, she and Patrick share parts of their new lives with each other as they try to rebuild their friendship.
La Vie en Rose (The Chauffeur’s Son) by barelypink (rated T) Patrick Brewer has been in love with Alexis Rose since he was a kid. The son of the Roses' chauffeur and personal cook, Patrick has returned to the Rose estate, Rosebud Manor, after years of living abroad and may have finally caught the eye of the object of his obsession. That is, until David Rose makes an appearance and changes everything. An AU based on Sabrina. No prior experience with the movie necessary to enjoy this fic.
43 notes · View notes
barrebard · 3 years
Text
Reading in 2020 - December Edition
Completed
Royal Holiday by Jasmine Guillory
We Have Always Lived in the Castle by Shirley Jackson
The Lottery and Other Stories by Shirley Jackson
You Had Me at Hola by Alexis Daria
Once and Future by Amy Rose Capetta and Cori McCarthy
Party of Two by Jasmine Guillory
The Peter Rabbit Library (Books 1-12) by Beatrix Potter
Charlotte’s Web by E.B. White
The Lion, The Witch, and the Wardrobe by C.S. Lewis
A Christmas Carol by Charles Dickens
The Space Between Worlds by Micaiah Johnson
All Systems Red by Martha Wells
The Night Witches by Garth Ennis and Russ Braun
The Deep by Rivers Solomon
Not If I See You First by Eric Lindstrom
The Best We Could Do by Thi Bui
The Fifth Season by N.K. Jemisin
The Obelisk Gate by N.K. Jemisin
The Stone Sky by N.K. Jemisin
Heck. That’s a lot of books.
In Progress
Taking Charge of Adult ADHD by Russell A. Barkley, PhD with Christine M. Benton
Solutions and Other Problems by Allie Brosh
Final Goals Update
79/52 books read
22/24 books on Read Harder Challenge
10/10 items on Reading Glasses Challenge
7/20 books read from TBR spreadsheet
Tomorrow I’ll make a Top 5 post, like I did last year, to highlight some of my favorites of the year.
1 note · View note
kuzi-the-hunturr · 5 years
Note
Can you introduce us to your ocs?
All of them? are you sure? bc I have... like 34 Main characters, don’t get me started on secondary ones, oh lord we’d be here from now through sunday!
BUT! Main characters: (I had to get my list for this) In alphabetical order, with a short description:
Aileen: (sims 2) 12 years old, troublemaker and wants to be a witch.
Alexi: (various hi-tech universes, star wars, star trek, etc.) space pirate, smuggler, engineer, general nuisance
Angel: (various) Toddler-size fairy who likes to go treasure hunting.
Anya: (sims 4) resident bookworm and know-it-all, has about 70 PhDs and a dog
Ayli:(Na’vi, Avatar) Big shy, friendly giant
Babela:(various) Oldest (in age) OC, traditional witch, grandma friend, takes no shit, could do without the nonsense
Chizuko:(shaiya online, various) gloomy, assassin, would rather just sleep.
Danhe Shaderune:(Skyrim, Neverwinter online, various) 5 ft nothing worth of sass, rage and tramp.
Djal: (sims 2, star wars) Twi’lek, actual cupcake, has three kids, two dogs, and one (1) hubby.
Djenna McHowl:(Monster High, sims 4, various) tomboy, has big family, unruly werewolf teenager
Driga:(Dragonfable, sims 4, various) Mage, jovial, has an unnecessary collection of staffs.
Fendil:(Elfquest, various) Teeny, Blind, cutie patootie, dog person
George Von Scru: (Sims 2, sims 4, various)  Modern witch, sister friend, is always right.
‘Haleh:(Guild wars 2, various) Plant baby, necromancer, slightly stale cinnamon roll.
I:(self insert #1) runs the household, terrifies everybody, cool, calm and collected, 
Ira Owen:(Hogwarts Mysteries, Harry potter, Sims 4) teenage witch, sibling problems, in love with her enemy, would just like to finish school in peace please and thank you.
Janice Jones:(RDR, sims 4) cowboy, trigger happy, has abt 12 warrants on her.
Kuzi:(wow, sims 4) Main character on this blog, the mom friend, collect all the stuff, animal friend.
Lilly Lime:(sims 3, steampunk) Is somehow always in trouble, weird household, archaeologist, dislikes Anya
Lopi:(Ffxiv) Also tiny, cute, basically neko, has no idea what she’s doing
Magdalena (Maggie) O’Malley: (Sims 1, sims 4) bohemian, has a dark side, but is still a cupcake
Me:(self insert #2) comic relief, secretly smart, Big Weird
Mirnah Magali (Homestuck) Weirdo, good person, spaced out, 
Myra (Homestuck) Mirnah’s friend, shy, will adopt any pet she finds, sleepy
Nandani:(Forsaken World) Diva, graceful, sensual
Nr 98 (Remy Martin)(KND, Sims 4) genderfluid, troublemaker, probably wanted for something, closest family member is a bulldog
Pauley Galina:(Various) lawful evil, gangster, mystery noir, lesbian, vindictive AF
Shirai Simone Oniji (samurai of legend, various) calm before the storm, demonic affiliations, can and will hex you, has a big snake.
Shuhei: (4Story, various) loves martial arts, no nonsense please, lil bit grumpy
Valiant: (Runescape, sims 4) everyone's friend, too many pets, traveler, farmer
Viela: (sims 2, Sims 4, Forsaken World) gloomy gus, leave me alone, emo
Yami:(Pokemon, sims 4) can't sit still, rarely has pokemon in their actual balls, cupcake
Ylang:(Sims 2, Sims 4) plantsim, mom friend, will mother you, beautiful mature woman, has 20 children
Yuun:(Archeage, Sims 4) Dangerous flower, actual human mochi, very traditional, adorable
-----
Characters available for asks on this blog:
Kuzi (Troll Hunter) main character on the blog, is probably your friend already
Meche (Draenei Paladin) The only responsible adult of the bunch
Kieldaz (Draenei Death Knight) Meche’s sister, depressed, gloomy and mana addicted
Sominee Stormcrow (Night Elf Demon Hunter) skeptic, searching, hits first, thinks later
Valeeria Eveningsong (Nightborne Warlock) curious, suddenly among savages, new bby
-----
@jacobdcheshyre I don't know if this is what you had in mind, but here it is!
4 notes · View notes
embracingwild · 6 years
Text
“i wrote down my goals so when i get them i’ll know i was brave enough to want them” - alexi pappas.
tagged by two of the most lovely humans on the planet @musingsoflulu and @blackcoffeeandblankpagess to write 10 goals,10 years. 
1. Make someone smile, every day. In the last year, I’ve learned what a difference it makes to feel loved and to feel happy. I’ve always liked supporting others and creating laughs, but I am going to make this a conscious effort. Sometimes a smile can change everything.
2. Create meaningful research. I love my work and I love all research. In the next ten years, I want to publish or write something that CHANGES things. Maybe it’s just for one person or one clinic. But I didn’t start this journey to get my name in a paper - I want to make a difference.
3. Running goals! I have so many. I would love to run two 1/2 marathons and a marathon within 10 years. And I also want to break 20 in the 5k and start trail running. But above all, I have such a passion for this sport, and I want to keep it. Racing is simply a bonus when you consider how powerful just running throughout the week is.
4. Finish my master's and PhD. Looking at this goal scares me, and imposter syndrome is so real. But I want these degrees so badly, and there is so much change and movement I could create with them.
5. Learn to rock climb. I’ve been wanting to really learn how to climb for so long and it would be such a wonderful sport for the winters in Canada!
6. Be here now. I have spent too much of my life either worrying about the future, dwelling on the past, or feeling trapped in racing thoughts. There is so much beauty and happiness in being present, in taking things a moment at a time and experiencing the world as it comes. 
7. Get married and find a place to call home. Honestly, there is no time limit or rush on this. I am grateful to have someone I can’t wait to spend quiet mornings with for the rest of my life. 
8. Explore more! I’d love to travel, but money is an issue. So I want to explore in all ways - on trails, in mountains, to a park I’ve never been to. Adventures can happen in so many ways, and I want to find them. 
9. Become more informed. Politically, environmentally, and economically. I have a lot of fear surrounding politics because of my dad, and so I never keep as up to date on things as I should. But I want to be an active feminist, not just take an empty title, and I want to leave a positive mark on this world in as many ways as I can. 
10. Love. Just love life. The grass, running, my work, Sean, my friends, the moon, my tattoos.... I just want to give love. 
i tag the wonderful @cameroncold @healthy-happpy-hipppy @plantedsarah @adventuress-alex @runner-vs-theworld @banannafritter and @runningwildflower
78 notes · View notes
craigrcannon · 3 years
Text
Employee #1: Reddit
Employee #1 is a series of interviews focused on sharing the often untold stories of early employees at tech companies.
Chris Slowe was the first employee at Reddit. He worked at Reddit for five years, then Hipmunk for five years, and now he’s back at Reddit, writing code.
Discussed: YC’s First Batch, Meeting The Founders, Finishing a PhD While Working at a Startup, Keyser Söze, Reddit as Vocation, Maintaining a Life Outside a Startup, and Returning to Reddit.
Craig : You’re back at Reddit now. What’s your role at this point?
Chris : It’s kinda two things. I started off working on some front page redesign stuff that we’ve got planned. I’m also working on a new version of our algorithm. Our current version is about eight years old. I also wrote that algorithm.
Craig : [Laughter]
Chris : Practically speaking, we’re probably a hundred times bigger than we were when we wrote that, so that was my initial task. I’m also forging one of our new engineering teams, which we internally call “Anti-Evil.” We’re anti-spam, anti-abuse, and sort of anti-cheating. I guess we’re anti-everything. Pro-freedom!
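(For context: the "hot" ranking function from Reddit's old open-sourced codebase is public, and a minimal Python sketch of it is below. This is the classic public version from roughly the era Chris describes, not the rewrite he mentions, so treat it as illustrative rather than current.)

    from datetime import datetime, timezone
    from math import log10

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def epoch_seconds(date):
        # Seconds since the Unix epoch, as a float.
        return (date - EPOCH).total_seconds()

    def hot(ups, downs, date):
        # Classic "hot" score: log-damped net votes plus a time bonus,
        # so newer posts outrank older ones at equal vote counts.
        s = ups - downs
        order = log10(max(abs(s), 1))
        sign = 1 if s > 0 else -1 if s < 0 else 0
        # 1134028003 is the reference timestamp in the open-sourced code
        # (December 8, 2005).
        seconds = epoch_seconds(date) - 1134028003
        return round(sign * order + seconds / 45000, 7)

The seconds/45000 term means a post needs roughly 10x the net votes to keep pace with an equally scored post made 12.5 hours later, which is the kind of tuning a hundredfold traffic increase could force you to revisit.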
Craig : Right on. And prior to re-joining Reddit you were at Hipmunk. How was it working there?
Chris : I really enjoyed it. I think the thing we learned most of all there was that breaking into travel is really hard. There are a lot of big players and most travel companies aren’t technology companies. I can’t tell you how many times I was on a call and the other person on the phone was referring to their engineering staff as “IT.”
Craig : [Laughter]
Chris : It was like, “Oh, we’re having one of those calls.”
Craig : “Let me get the nerds in here and they’ll figure it out.”
Chris : Yeah!
Craig : That’s hilarious. Ok, so could you give me the rundown of how you ended up at Reddit?
Chris : Sure. I was in Y Combinator’s first batch, along with Steve Huffman and Alexis Ohanian. I was working at a different startup and we were doing desktop search. This was at a time before desktop search was a thing. What kind of killed us that summer was Apple coming out with Spotlight, then Google Desktop came out so we had a hard problem with so many players in the field.
At the end of that summer, my cofounder decided to go to grad school. This was the first YC batch so it was totally different. It was sort of a three month trial to build a product and see what happens. We were at the first demo day, which was actually kind of fun. Something like 20 people showed up.
Craig : Wow. So how did you connect with Steve and Alexis?
Chris : By the end of the summer I had two free bedrooms in my apartment. I was good friends with both of them at that point. I think they originally planned to move back to Virginia but I believe Paul Graham talked them out of it. So they had basically given up on their flat and now needed a place to stay. My cofounder from YC, Zak Stone, was like, “Want to stay with Chris?” And they were like, “Okay, great.”
Craig : And at what point did you start working on Reddit?
Chris : I want to say like three months later. I was in grad school at the time and I had much more grown-up hours, where I would wake up at 7 or 8 in the morning, go to work, and come back then work on projects at night. Steve and Alexis would sleep in then work until like 4 in the morning.
Because I was up early I’d check Reddit and when it was down I’d knock on Steve’s door and be like, “Hey, site’s down.” After the third time that happened, he just showed me how to log in and start it back up.
Craig : That’s great.
Chris : So I guess my first job at Reddit was in ops. But yeah, at that point it was still Steve writing code and Alexis doing everything else. We were friends and he asked me if I wanted to join, and I did. That was probably six months after Reddit started.
Craig : You were still in grad school studying physics, right?
Chris : Yeah. That’s when I was in my fifth year of grad school.
Craig : And did you have to pause everything to make that happen?
Chris : No. So I’d go to lab and work from 8 to 6 then come home, eat dinner, and join them in the living room to hack for a while. The nice thing is, I was given work that was sort of independent of what everyone else was working on so I wasn’t a blocker.
I think the first thing I worked on was traffic monitoring. This was at a time before Google Analytics. It was like processing access logs and generating summaries and trying to figure out how to do this at scale. I must have rebuilt that damn thing eight times in the first four years.
The thing about that time was we were all learning how to program web apps while we were building them and there wasn’t really a standard operating procedure or anything.
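(A minimal sketch of that kind of access-log summarizer, in Python. Hedged: the log format, regex, and file name here are illustrative assumptions, not Reddit's actual code.)

    import re
    from collections import Counter

    # Matches the request path in a common/combined-format access log line.
    LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

    def summarize(log_lines):
        # Tally hits per path and return the ten most-requested pages.
        hits = Counter()
        for line in log_lines:
            m = LINE.search(line)
            if m:
                hits[m.group("path")] += 1
        return hits.most_common(10)

    if __name__ == "__main__":
        with open("access.log") as f:  # hypothetical log file
            for path, count in summarize(f):
                print(f"{count:8d}  {path}")

Doing this "at scale" is where the repeated rebuilds come in: a single-pass Counter works fine until the logs no longer fit on one machine.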
Craig : So you were essentially working part-time?
Chris : Part-time in startup hours but it was like a full-time job. I would normally work from 6 to 2. Then go to sleep, get back up, and do it again.
You know, your 20s are a magical period of time. I could get by on four or five hours of sleep without any major side effects. Basically it was like that for all of 2006. It was like two full-time jobs. The kicker is I somehow managed to meet my wife during that period.
Craig : That’s amazing. So what happens next?
Chris : Well, the four of us – Steve, Alexis, me, and Aaron Swartz – worked on it until the acquisition, which was around Halloween 2006. And it all happened really fast. We were a 15-month-old startup.
I remember the next night I was making pizza with my girlfriend, now wife, and I called Steve and was like, “Hey, we’re making pizza, Do you want to come over?” And he was like, “I am in California.”
Craig : Whoa.
Chris : Yeah. So I was like, “Oh, well, okay then.”
Craig : [Laughter] And so how long did it take before you moved to California?
Chris : I looked for apartments in January and we moved out early February. Part of the agreement with Condé Nast was that — I think it actually said this in the contract — “Chris gets to finish his PhD.”
So I got to the point where I could leave Cambridge and write my thesis remotely. It was kind of a fun transition, going from a full-time job as a researcher and a second full-time job in a startup to a full-time job at an acquired company where I could spend my nights writing a thesis.
Craig : So let’s step back a little bit. Did you think that you would be interested in working with the Reddit guys when they moved into your apartment? Or were you just buddies?
Chris : Probably a little bit of both. At the time it was just because they were buddies and they needed a place. I had no particular plans at all. I was coming off of the failure of my first startup. We were trying to solve this problem of basically like, “I can’t find anything on my hard drive. I have all these areas I can’t search!” What happened practically is that the problem doesn’t come up any more because there is almost nothing on my hard drive that doesn’t exist in some state online.
Craig : Yeah, exactly. So what about Steve and Alexis compelled you to want to work with them?
Chris : At the time it was actually interesting just to be working as a web dev to be honest. Getting into the web scene was kind of a neat thing. I also liked Reddit.
Here’s a funny story. That summer everyone in the first YC batch was a beta tester for Reddit. This was before comments existed, so it was just a bunch of links.
Eventually it kind of opened up and we got a few people Steve didn’t know personally. But for like four months most of the content on the front page was from one of the alt accounts Steve and Alexis had. They were basically populating it as a way to make it seem like there were more people there. Because nobody wants to walk into an empty room. Right?
Craig : Right.
Chris : So my username on Reddit is KeyserSosa, which is a misspelling of Keyser Söze, which is the Usual Suspects villain.
I remember a day, probably in November, when Steve took a day off. He came back a couple hours later and there was new content on the front page and he hadn’t done anything. It was like this moment of like, “Oh, my God! It’s walking!”
Craig : [Laughter]
Chris : And he’s like, “Great! There are actually people on the site who I don’t know and they are posting all the time. There’s this one guy, KeyserSosa, who’s super active!”
They we’re like, “KeyserSosa? Who is KeyserSosa?”
And I’m like, “Oh, hi guys.”
Craig : [Laughter] That’s so good.
Chris : Anyhow. I achieved my peak on Reddit probably in the first year, in terms of being one of the top posters. And then you know, it was all downhill from there.
Craig : Yeah. I was wondering what your relationship with Reddit is now. Not the company, but the community.
Chris : I’ve definitely become much more of a lurker. My use was definitely a side effect of working on it. When I was originally here for the first five years, at the time there was never more than four or five of us working on the site.
Craig : Oh, wow.
Chris : We were kind of professionally understaffed. At least at that point we were really understaffed and always growing at a really phenomenal rate–like doubling every six months. So we were kind of wearing a lot of hats as engineers. We were engineers, and also the community team, and also infrastructure.
I am an introvert who has become an extrovert via the Internet, or something like that. I feel like lots of talking and thinking in that vein is much more draining than sitting and doing engineering work. That definitely contributed to me leaving.
Craig : Yeah, that makes sense.
Chris : So when I left the first thing I did was go on a six-month Reddit detox. Essentially I was like, “Alright. I just can’t look.” And I didn’t look at it.
The thing is, it was and still is like my baby. And I can say that, I have kids now.
When one thing goes wrong, I take it personally. In 2010 I was basically in charge, so everything was either my fault or something I had to deal with. I think the only way to not feel completely attached to all the things that were happening, or whatever mistakes were being made, or whatever drama was happening, was to step away for a little while. You kinda have to do it.
Craig : So how did your relationship with the founders and the early team change over time?
Chris : I don’t think very much, actually. The team was always small so we were and still are a group of friends. I think there’s no other option than to be like comrades in arms in that case. At the very worst, we were the 300 holding back the hordes.
I think because we got acquired so early we had to really justify our budget and keep the team small. We couldn’t get an infusion of cash to grow because we were already bought and so it sort of stunted growth initially. Another side effect is that the look of the site has kind of been the same for a very long time. There’s a whole bunch we have to kind of rebuild.
The flip side of that is that we got really nimble and good at a bunch of things. But we’re now up to I think 120 people. And we’re independent again.
Craig : So now do you have startup-like growth goals?
Chris : We’re kind of acting like a three-year-old startup with ten years of legacy and some good standard operating procedures, which is nice.
Craig : When you look back and consider the early days, how do you feel about Reddit?
Chris : It’s overall positive. It’s been a lot of fun. I mean, it’s been a lot of stress, but it’s also been a lot of fun. Since I’m back now, it’s almost like it’s not so much a part of my career as it’s become my career. Maybe “vocation” is a better word. I still take a lot of the stuff really personally even though I’ve only been back for about six months.
Our fingerprints are everywhere. I think it is fair to say that the snarky tone that still pervades Reddit is an outcropping of Steve. That’s his personality and he kind of imprints it on the community. I think in the same way a company’s tone and culture is a reflection of the founders, so too is the community it creates.
Craig : You’ve been around so many startups. Do you ever have thoughts of doing your own thing again?
Chris : I am very content to be first employee in all things. I’m close enough to be able to hear about the fundraising, and the acquisitioning, and the business side of things. But I do not get invited to any of those meetings, which is just wonderful as far as I’m concerned. Right now, my job here is as an engineering manager. I have a team of like six and honestly, that is a good size for me. I would rather be an engineer who is a manager, rather than a managing engineer, or an office manager, or C-something. I actually enjoy doing the work.
Craig : Right on. Are there any signs that you would advise someone to look for if they are considering being a first employee?
Chris : I would say the first three to six months is gonna be a slog. It’s gonna be a tough slog. That said, startups have culturally matured in the last ten years and it’s been fun to watch. When the first batch started at YC, there was all this talk like, “Oh, yeah, you should work 16 hours and day and not feel bad.”
What’s really great to see is that all those people who were working 16 hours have now grown into their thirties and realized that, “Oh, sleep is really cool.”
Craig : [Laughter]
Chris : And, “You should probably date.” And, “Do you know what is also awesome? Kids. And do you know what kids don’t let you do? Work.” So there’s been this kind of progression from just working all the time to still working hard while also having a life.
Because there are only a few people around in the beginning you have to be willing to switch hats really quickly. Especially for the startups, traffic is irregular, and you’re not up-scaled, and you have to kinda deal with that stuff live.
You’ll also have a responsibility to set the tone for the company. The same holds true with the founders.
Craig : What about the founders? Do you think there any traits successful founders share?
Chris : It sounds trite but determination. Ideas are important. Luck is important. But follow-through is really important. This is sort of separate from the founders but there’s also timing.
After we started, everyone compared us to Digg for five years before Digg had its problems. But we didn’t even know about Digg when we started.
We were a dime a dozen for a while. It was actually funny. There was us and a bunch of Digg clones, which was amusing.
Craig : Right on. Let’s stop there. Any last words of wisdom?
Chris : The internet has a long memory!
0 notes
opedguy · 3 years
Text
Biden Pushes U.S. to Confront Russia
LOS ANGELES (OnlineColumnist.com), Feb. 5, 2021.--Pushing 78-year-old President Joe Biden and his 58-year-old Secretary of State Tony Blinken into a confrontation with 68-year-old Vladimir Putin, the U.S. news media continues to stir the pot on 44-year-old dissident Alexei Navalny’s arrest, conviction and sentencing for violating his probation.  Navalny was poisoned in Tomsk, Siberia Aug. 20, 2020 with Novichok, a Soviet-era poison only used by Russia’s FSB [formerly KGB] security service.  After recovering in Berlin, Navalny blamed the poisoning on Putin, prompting violent street protests around the country.  Navalny, who was sentenced to two years and eight months in prison in a Moscow court Feb. 2, blamed Putin for attempted murder.  Biden and Blinken demanded that Putin immediately release Navalny and all other political prisoners.  Biden said today that the U.S. will not be “rolling over” when it comes to dealing with Putin.
       Biden talked tough today to the State Department staff, but all recall March 1, 2014, when Putin invaded and annexed Ukraine’s Crimean Peninsula while former President Barack Obama and Biden did nothing.  Now the U.S. press reports that Sergey Maximishin, the 55-year-old emergency room physician who treated Navalny in Omsk, Siberia, has died suddenly of a heart attack.  Whether Maximishin died of foul play or not, how much does Biden want to destroy U.S.-Russian relations for a publicity stunt?  “With regret, we inform you that . . . the deputy chief physician for anesthesiology and resuscitation of the emergency hospital No. 1, assistant of the department of Omsk State Medical University, PhD of medical sciences, Maximishin Sergey Valentinovich suddenly passed away,” the hospital said in a statement.  The U.S. press implied strongly that Maximishin was murdered by the FSB.
       Whatever happens inside the Russian Federation, the American press has made a strong case that Putin continues to use his power to cover up his role in Navalny’s Aug. 20, 2020 Novichok poisoning. Maximishin attended to Navalny immediately after the poisoning, putting him in a medically induced coma to save his life.  While Russian authorities say that Maximishin died of natural causes, the U.S. press and Navalny’s followers think Putin engaged in foul play.  Biden said today that he wouldn’t let Putin get away with murder anymore, something he accused Trump of doing.  Demanding that Navalny be released from prison, Biden has thrown down the gauntlet only two weeks after his inauguration.  House and Senate Democrats are busy preparing Trump’s second impeachment trial next week, despite the fact that impeachment is only used to remove a president from office.
       Reporting on every detail of the wild speculation about Putin’s role in ordering the FSB to murder the ER doctor who treated Navalny, the U.S. press pushes Biden closer to a confrontation with Russia.  Even Navalny’s top aide said it’s “not uncommon for doctors of his age to suddenly die,” yet the U.S. press certainly hasn’t given Putin the benefit of the doubt.  With western journalists obsessed with Navalny’s case, it’s looking a lot like the Oct. 2, 2018 disappearance and death of Washington Post contributor Jamal Khashoggi at the Saudi consulate in Istanbul.  Trump refused to sanction 35-year-old Crown Prince Mohammed bin-Salman, the de facto leader of Saudi Arabia. As with Khashoggi, the press can’t stop its obsession with Navalny, pushing Biden and Blinken into another Cold War over Navalny’s Feb. 2 sentencing to two years and eight months in prison.
       Focusing on Navalny’s ER doctors in Omsk, the media continues to go after Putin, demanding he be removed from office.  Navalny not only exposed Putin’s corruption, he wants the Russian people to rise up and toss him from office.  Biden told the State Department today that he has returned to diplomacy, now that he’s threatening Russia with sanctions.  Biden slammed Trump for “rolling over” for Putin, a popular Democrat talking point started by former Secretary of State Hillary Rodham Clinton in the 2016 campaign.  Hillary’s paid opposition research, AKA “the Steele dossier,” accused Trump of colluding with the Kremlin to win the 2016 election.  Former President Barack Obama and Biden used Hillary’s rubbish dossier to open up CIA and FBI investigations into Trump’s alleged ties to Moscow, something completely fabricated by former British MI6 agent Christopher Steele.
         U.S. and European media outlets are now obsessed with Navalny, seeking to create an uprising in Russia to topple Putin. “[Maximishin] knew more than anyone else about Alexei’s condition so I can’t dismiss the possibility of foul play,” said Leonid Volkov, Navalny’s chief of staff.  Taking the bait, the U.S. and foreign press have been gaslighted by Navalny’s PR team, which seeks to get the world press, especially in the U.S. and EU, behind toppling Putin’s government.  Since nationwide protests over Navalny’s arrest, conviction and sentencing, Putin has cracked down on Russian protesters, arresting thousands around the country.  Biden and Blinken don’t get that they don’t have the support from the U.N., EU and NATO to denounce Putin, knowing the amount of energy the EU buys from the Russian Federation.  Focusing on Maximishin’s mysterious death only makes matters worse.
 About the Author  
John M. Curtis writes politically neutral commentary analyzing spin in national and global news. He’s editor of OnlineColumnist.com and author of Dodging The Bullet and Operation Charisma.
0 notes
runningandsunning · 6 years
Text
“i wrote down my goals so when i get them i’ll know i was brave enough to want them” - alexi pappas.
10 years, 10 goals. Tagged by the lovely @kiara-shannon ! Thankful for this tag because I love love love goals and also badass, amazing, inspirational women speaking their goals into the existence and supporting each other!
1. Be outside more often. Get my butt into nature for at least 15 minutes a day no matter the temperatures, the weather, my mood. I love nature and it always cheers me up so I need to make more of an effort to appreciate my surroundings
2. Read and write more. I used to absorb books as fast as I could get my hands on them, but lately I’ve been growing a stack beside my bed untouched with all the desire to read them but no motivation to begin. I used to write constantly and at some point during college and the deeper sides of depression I gave it up and haven’t gotten back into the swing of it but I hope to start again.
3. Love the people around me so much that they could never think that no one loves them. Shower them with love and encouragement and light. Be the person that I needed for such a long time for others. Remind them why they’re here, why you love them.
4. Be more intentional in my life. Spend time in ways that make me happiest, not how I think I’m supposed to live. Pursue my passions at every cross road, love more, live more, be more with purpose.
5. Be more environmentally conscious. As someone who studied environmental biology in college I have some habits that need changing- eating more plant based, reducing plastic use, buying products that are good for the environment, and using my voice/social media as a platform for what I’ve learned. Hopefully in the next ten years the world will re-adjust their views of science and the environment and recognize the dire consequences of humans on the planet.
6. Get my master's and then my PhD in an environmental biology field, specifics I don't yet know. Also related: get a job that uses my degree!! An environmental job that's long term is the big goal here
7. Marry the love of my life within ten years. Make life as beautiful and simple and magical as I can with my best friend by my side. Start a family with him and make home a tangible place.
8. Travel! Travel! Travel! My list grows all the time and I have yet to conquer any of it. The galapagos, the Appalachian trail, Arizona, the redwood forest, Yellowstone, the JMT, Canada, so many others. Just start crossing them off the list
9. Run when I want to, not when I think I need to. Fall back in love with it, never let it become a chore or an obligation. Sign up for more races. Start small, stay healthy and strong, go longer, get faster. I would eventually love to run an ultra. But part of my goal is to not be too heartbroken if my injuries never let that happen. Just be content to run because I have legs that are capable of movement
10. Be open, be happy, learn. Never stop learning.
I tag @mountains–and–miles @cerulean–stars @mountaindaze @toethefinishline @ohhforfoxsake @theghostofcaroline @runpeachy and anyone else who hasn’t already been tagged!!
11 notes · View notes
alexistudies · 2 months
Text
monday, march 11th 2024
spring break came and went in the blink of an eye, but it was lots of fun! met up with friends from HS, went to disneyland, and spent time with my family. I also still did some work over break that was time sensitive, but I can't really complain :)
it's officially past the halfway point of the semester and i can see the finish line. ahhhh. i also realize i need to start studying more for my classes because things are getting more complex... BUT WE WILL SURVIVE
follow my studygram if you want a more frequent, up-close look at my daily academic life - IG: lexthephdstudent
265 notes · View notes
physics-dirtbag · 6 years
Text
“i wrote down my goals so when i get them i’ll know i was brave enough to want them.” - alexi pappas.
10 years, 10 goals. tagged by @northruns. thanks for tagging me in this, it’s unique & timely given it’s a brand new year!
1. Stay consistent with running. As conflicted as I feel about the sport right now, I truly enjoy it and it (usually) makes me feel better. I don’t want to feel obligated to run fast or far, I want to relearn to enjoy running just for running’s sake. I’m getting there. (I’m thinking a summer training plan while I’m not in school? Probably not the greatest idea, but I don’t like being bored. I'll give this more thought…)
2. Graduate from undergrad. Right now, it’s looking like this will happen in 2020, but obviously, things happen and that is subject to change. And if it does, that’s okay. However, I also know that I enjoy learning and the structure of school, so that is why I’m adding this as a goal.
3. Get into/graduate from graduate school (and maybe get my phd). Most likely in physics but who knows!
4. Find hobbies outside of school and running that I genuinely enjoy. (Bonus if I put myself in social situations I wouldn’t normally!)
5. Continue reading a lot!! Any books!! All books!! All genres!! Bring on the books!! Also, continue to go to bookstores, even if it’s just for fun!!
6. Learn how to productively and effectively cope with my mental illnesses. This may include going to therapy, continuing meds, etc.. Regardless, I hope to discover the person I can be when not desperately clinging to fragments of who I used to be. In order to do this I need to be willing to face certain things I have avoided previously.
7. Find a cool job that I enjoy, whether it be a summer (physics) internship, part time as I am in school (although I do enjoy my current gig as a dog sitter/walker), or a longer-term thing after I graduate. Or all of the above!
8. Be willing to put myself out there in order to make friends/form relationships.
9. Write. More. I started writing a book last spring, but I stopped because I didn’t think it fit with my personality as it focused on me as a human person rather than me as a robot. Maybe I’ll pick it up again, maybe not. In the hospital, I wrote a lot because I was bored all of the time (and therefore thinking all of the time, hence more ideas to write on) and encouraged to write about my feelings. Now, I don’t have as much time and tend to avoid anything humanizing altogether, but honest writing is important, and I want to get back to actually doing it rather than the half-assed, superficial scrawls that are currently happening.
10. As I have alluded to in previous goals, learn to treat myself as a human being, not as a robot. This includes letting go of black or white thinking, as well as not focusing on productivity/effectiveness all of the time. Mistakes are okay. Balance is good, vital even. 
tagging @championsaremade @littlebean-jellybean @theevergreensoul @thelowbrass and anyone who has not done this yet and would like to do it!!!
3 notes · View notes
evoldir · 4 years
Text
Fwd: Graduate position: LilleU.PleistoceneHydrosystems
Begin forwarded message:

> From: [email protected]
> Subject: Graduate position: LilleU.PleistoceneHydrosystems
> Date: 11 April 2020 at 06:51:08 BST
> To: [email protected]
>
> Interdisciplinary PhD project – Geosciences – Evolutionary Biology (INSU-INEE)
>
> Institutions: UMR 8198 Evo-Eco-Paleo, CNRS/Lille University; CEREGE, Aix-en-Provence; UMR 7516, CNRS/Strasbourg University
> Primary lab attachment: UMR 8198 Evo-Eco-Paleo, CNRS/Lille University
> Doctoral school: Lille University ED104 Sciences de la matière, du rayonnement et de l’environnement (SMRE) – branch: Geosciences Ecology Paleontology Oceanography
>
> Project title: Reconstructing Plio-Pleistocene hydrosystems in the Omo-Turkana Basin with integrative studies of sedimentology and freshwater mollusks
> Project: EnviroMolSed (CNRS 80|PRIME project)
> Promotorship: Dr. Bert Van Bocxlaer (CR CNRS), Dr. Alexis Nutz (MC AMU), Dr. Mathieu Schuster (DR CNRS)
> Email contact: bert.van-bocxlaer|at|univ-lille.fr, nutz|at|cerege.fr, mschuster|at|unistras.fr
> Start date and duration: 1 October 2020 for 3 years
>
> Vacancy description
> We are pleased to announce a PhD fellowship for a highly motivated, enthusiastic and independent person with a keen interest in the paleontology of freshwater mollusks and their application to paleoenvironmental reconstruction through the integration of taphonomy and sedimentary geology. Background knowledge of evolutionary biology, morphometrics, ecological data analysis, facies analysis, sequence stratigraphy and enthusiasm to participate in fieldwork in Africa are plus-points.
>
> Project description
> Freshwater mollusks are common in lakes, rivers and wetlands and, hence, they record living conditions in continental hydrosystems. They leave abundant fossil remains in the deposits of various basins in the East African Rift, but despite their unique potential to reconstruct paleoenvironments, including those in which hominids evolved, they remain underutilized compared to terrestrial vertebrates. We propose a trans-disciplinary PhD project on the sedimentology, taphonomy and paleontology of late Cenozoic mollusk assemblages of the Omo-Turkana Basin to reconstruct hydrosystems of the basin in space and time. Although the basin harbored various paleolakes, it is unclear whether changes in aquatic communities coincide with major lacustrine transgressions and regressions, and how environmental change affected biotic communities. We propose to study freshwater mollusk communities over time from stratigraphically constrained shell beds together with depositional facies and basin-scale sequence analysis.
>
> Setting and requirements
> The project is funded by the CNRS 80|PRIME initiative and will be developed in an inter-institutional collaboration between the UMR 8198 Evo-Eco-Paleo of the CNRS and Lille University, the European Centre for Research and Teaching in Environmental Geosciences (CEREGE) in Aix-en-Provence and the Institute for Earth Physics (UMR 7516) of the CNRS and Strasbourg University. Furthermore, this project is embedded in an ongoing GDR on the East African Rift that brings together a larger research consortium. Lille University is the diploma-granting institution for this PhD project, so the successful candidate will be subscribed to a doctoral school of Lille University. Master students graduating over the summer are welcome to apply. More information on studying at Lille University can be found on the Lille University webpage: https://ift.tt/3c4Knk8.
>
> Profile of the candidate
> - Master's degree in a relevant field (geosciences, paleontology or paleobiology or equivalent)
> - Eager to acquire new competences and knowledge
> - Fluent in English, knowledge of French is a plus-point
> - Ability to work in an interdisciplinary and collaborative environment (independency, reliability, integrity)
> - Ability to write clear scientific reports and disseminate results
> - Have good non-academic attributes (e.g. maturity, open-mindedness, respectfulness)
>
> Interested?
> This vacancy will be published at the beginning of May on the CNRS employment portal and will be available for 21 days. Only applications through the employment portal are eligible. In the meantime feel free to contact the abovementioned promotors for informal inquiries about the project. Feel free to contact Bert Van Bocxlaer (bert.van-bocxlaer|at|univ-lille.fr) to receive detailed application instructions from the moment they become available.
>
> Bert Van Bocxlaer
0 notes
sociologyontherock · 4 years
Text
The Clipboard
By Stephen Harold Riggins
Books and Theses
Rosemary Ricciardelli, Also Serving Time: Canada’s Provincial and Territorial Correctional Officers. Toronto: University of Toronto Press, 2019.
Peter Baehr, The Unmasking Style in Social Thought. London: Routledge, 2019. A symposium on this book is forthcoming in The Canadian Review of Sociology.
The symposium in The American Sociologist is now available as Online-first Articles.
Daniel Kudla, “Business Improvement Areas and the Justification of Urban Revitalization: Using the Pragmatic Sociology of Critique to Understand Neoliberal Urban Governance.” PhD dissertation, Department of Sociology and Anthropology, University of Guelph, September 2019.
Articles
 Judith Adler, “Tocqueville Mortal and Immortal: Power and Style.” In The Anthem Companion to Alexis de Tocqueville, Daniel Gordon (Ed.). London: Anthem, 2019, 45-64.
 Judith Adler, American Journal of Sociology, 124(6), 2019, 1848-1850. A book review of Gary Alan Fine’s Talking Art: The Culture of Practice and the Practice of Culture in MFA Education. “Gary Fine’s ethnographic study of three university-based graduate programs in art is sure to be recognized as an essential text in the sociology of art and the sociology of higher education.”
 Peter Baehr, “Unmasking Religion: Marx’s Stance, Tocqueville’s Alternative.” In The Anthem Companion to Alexis de Tocqueville, Daniel Gordon (Ed.). London: Anthem, 2019, 21-44.
 Emmanuel Banchani and Eric Y. Tenkorang, “Determinants of Low Birth Weight in Ghana: Does Quality of Antenatal Care Matter,” Maternal and Child Health Journal, February 2020. Online-first Article.
 Leslie Butler, Ewa M. Dabrowska and Barbara Neis, “Farm Safety: A Prerequisite for Sustainable Food Production in Newfoundland and Labrador,” Canadian Food Studies, 2019, 6(1), 117-135.
 Nilima Gulrajani and Liam Swiss, “Donor Proliferation to what ends? New Donor Countries and the Search for Legitimacy,” Canadian Journal of Development Studies, 2019, 40(3), 348-368.
J. Scott Kenney, “Western Civilization, Inequality, and the Diversity Shell Game,” Academic Questions, 2019, 32(3), 354-360.
Daniel Kudla, “Urban Authenticity as a Panacea for Urban Disorder? Business Improvement Areas, Cultural Power, and the Worlds of Justification.” In Planning and AuthentiCITIES. New York: Routledge, 2018, 75-93.
 Daniel Kudla and Michael Courey, “Managing Territorial Stigmatization from the ‘Middle’: The Revitalization of a Post-industrial Business Improvement Area,” Environment and Planning A: Economy and Space, 2019, 51(2), 351-373.
 Daniel Kudla and Patrick Parnaby, “To Serve and to Tweet: An Examination of Police-related Twitter Activity in Toronto,” Social Media and Society, 2018, 4(3), 1-13.
 Vincent Kuuire, Eric Y. Tenkorang, Prince Michael Amegbor, Mark Rosenberg, “Understanding Unmet Health-care Need among Older Ghanaians: A Gendered Analysis,” Aging and Society, January 2020. Online-first Article.
 Barbara Neis and Katherine Lippel, “Occupational Health and Safety and the Mobile Workforce: Insights from a Canadian Research Program,” New Solutions: A Journal of Environmental and Occupational Health Policy, 2019, 29(3), 297-316.
 Anton Oleinik, “On the Role of Historical Myths in Nation-state Building: The Case of Ukraine,” Nationalities Papers, 2019, 47(6), 1-17.
 Nicole Power and Moss Norman, “Re-inscribing Gender Relations through Employment-related Geographical Mobility: The Case of Newfoundland Youth in Resource Extraction,” Canadian Journal of Sociology, 2019, 44(3), 283-308.
 Alice Pearl Sedziafa, Eric Y. Tenkorang, Adobea Owusu, “Can Marriage (Re)produce and Legitimize Sexual Violence?: A Phenomenological Study of a Ghanaian Patrilineal Society,” Women’s Studies International Forum, 77, November-December, 2019.
 Jeffrey van den Scott and Lisa-jo K. van den Scott, “Imagined Engagements: Interpreting the Musical Relationship with the Canadian North,” Qualitative Sociology Review, 2019, 15(2), 90-104.
 Newsworthy
 Lisa-Jo K. van den Scott received the Helena Lopata Excellence in Mentorship Award from the Society for the Study of Symbolic Interaction.
 Rosemary Ricciardelli received the 2019 MUN President's Award for Outstanding Research.
 MA student Laura Squires was awarded the Social Sciences and Humanities Research Council of Canada Masters Graduate Scholarship in support of her MA thesis research. Her thesis project is titled “Are Correctional Programs in Newfoundland Effective? Examining the Experiences of Justice-involved Individuals with Mental Illness and Substance Use Disorders.” Her supervisors were Adrienne Peters and Rose Ricciardelli.
 David Chafe (MUN PhD in sociology) was featured in a CBC Radio story about his career in business, academia, and music. David has recently released a recording of piano pieces titled Still. The launch of the disc can be seen on a YouTube video. Music on the disc includes pieces by Grieg, Mendelssohn, Chopin, Brahms, Schumann, Moszkowski, and Rachmaninoff.
 Judyannet Muchiri, PhD Proposal Presentation, “Safe Spaces for Young Women’s Civic Participation in Kenya,” October 2019.
 The Newfoundland and Labrador Organization of Women Entrepreneurs featured a profile of MA student Ifeoma Ineh’s experiences of the MUN Entrepreneurship Training Program.
https://www.thenloweadvisor.org/post/profile-ifeoma-ineh
 The Department of Sociology sponsored the Henrietta Harvey Lecture “Writing Ocean Histories” by Helen Rozwadowski, Professor of History and Maritime Studies at the University of Connecticut and author of Vast Expanses: A History of the Oceans.
 Janet Harron, “What we don’t know: Sociologist Collaborates with First Light to Uncover St. John’s Indigenous History,” The Gazette, September 18, 2019. The article features the work of Rochelle Coté.
https://gazette.mun.ca/public-engagement/what-we-dont-know/
 Lecture by William Herbert, Banting Postdoctoral Fellow, Memorial University, “Trans Rights as Risks: On the Ambivalent Implementation of Canada’s Groundbreaking Trans Prison Reform,” October 25, 2019.
 Stephen Harold Riggins and Paul Bouissac celebrated their fiftieth anniversary in Germany and France in October. Their relationship is documented in the book The Pleasures of Time: Two Men, a Life (Toronto: Insomniac Press, 2003). Since retiring, Stephen has published two books of his photographs: Newfoundland, Ontario, Indiana: 1963-2018 and Quilt Blocks by Susan Ledgerwood. For the past two years he has been working on an edition of poems and interviews by Richard Brooks Hendrickson (1925-2019), in addition to his ongoing research project about the history of the MUN Department of Sociology. Paul Bouissac’s seventh book on the anthropology of the circus appeared in 2018, The Meaning of the Circus: The Communicative Experience of Cult, Art, and Awe (London: Bloomsbury Academic Press).
0 notes
seomiamiseo · 6 years
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
Machine Learning Resource | Time (hours) | Cost ($) | Year
Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17
{ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16
Machine Learning Crash Course | 15 | $0 | '18
OCDevel Machine Learning Guide Podcast | 30 | $0 | '17-
Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17
Fast.ai (part 1) | 70 | $70* | '16
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17
Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15
Andrew Ng's Coursera Machine Learning | 55 | $0 | '11
iPullRank Machine Learning Guide | 3 | $0 | '17
Review Google PhD | 2 | $0 | '17
Caltech Machine Learning on iTunes | 27 | $0 | '12
Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 | N/A
Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15
Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16
Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15
Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 | N/A
(The original chart also rated each resource's Credibility, Code, Math, and Enjoyability; those ratings were graphics and don't survive in text, apart from the two "N/A" entries above.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visually or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; trying Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content into beginner/need-to-know material (green) and intermediate material that is less-useful noise for individuals starting out (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
They gloss over potentially complex topics that may serve as noise.
The playlist runs ~2 hours.
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical (a minimal sketch of the style follows this review).
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker working on Windows (the suggested package manager). This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, my computer crashed, or I shut the computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
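For a flavor of what the first episode covers: Josh's "Hello World" example trains a decision tree to tell apples from oranges. A minimal sketch in that spirit (the feature values below are made up):

```python
from sklearn import tree

# Features: [weight in grams, texture (0 = bumpy, 1 = smooth)]
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = ["apple", "apple", "orange", "orange"]

clf = tree.DecisionTreeClassifier()
clf.fit(features, labels)  # learn the rule from labeled examples

print(clf.predict([[160, 0]]))  # a heavy, bumpy fruit -> ['orange']
```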
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with the ability to adjust speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises that run directly in your browser (no setup required!)
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews concepts in Coursera course very well, such that both pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes were more theory-based; all are interesting, yet impractical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas by having you apply the examples to another dataset. (A sketch of the lesson-one workflow follows this review.)
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire track go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the lesson's knowledge to their own workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I dove into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second took a bit longer to work through).
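For a sense of the lesson-one workflow, here's a sketch (not Kaggle's exact code; the column names come from the Iowa housing data the track uses, and the file path is whatever your notebook provides):

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

data = pd.read_csv("train.csv")  # path depends on your Kaggle notebook

y = data["SalePrice"]                                # prediction target
X = data[["LotArea", "YearBuilt", "BedroomAbvGr"]]   # a few predictors

model = DecisionTreeRegressor(random_state=1)
model.fit(X, y)

print(model.predict(X.head()))  # predictions for the first five houses
```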
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. This is the only course that I started to truly see the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.); a small Keras taster follows this review.
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
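Fast.ai's notebooks are too long to excerpt, but as a taste of the Keras style the 2016 course leans on, here's a tiny model definition (a toy architecture of my own, not the course's code):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(20,)),              # 20 input features
    layers.Dense(64, activation="relu"),    # one hidden layer
    layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints layer shapes and parameter counts
```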
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem (a toy version appears after this review).
Immediately afterwards, Aurélien pushes the reader to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introduction was very useful and put everything into context. This general-to-specifics approach worked well for me.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glossed over this. The frustration that most people experience during installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine Learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
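In the spirit of the chapter 2 exercise, here is the split-fit-evaluate skeleton in miniature (the book uses California housing data; synthetic data stands in here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data: y is roughly 3x plus noise
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(0, 1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LinearRegression().fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"test RMSE: {rmse:.2f}")  # close to the noise level (~1.0)
```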
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition of what each model is trying to accomplish.
This visual approach to learning the mathematics is very useful.
Covers a vast variety and breadth of models and machine learning basics (the course's first model is sketched after this review).
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started, and the limited instructions on setting up the environment, plus many failed attempts, caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, there is little acknowledgement of what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Co-hosts the machine learning podcast Linear Digressions (100+ episodes) with Ben Jaffe (currently a Facebook UI Engineer and music aficionado)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
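The course's first model is Naive Bayes (the mini-project runs it on terrain data); a stripped-down sketch of the same scikit-learn API, with made-up points:

```python
from sklearn.naive_bayes import GaussianNB

# Made-up 2-D points from two classes
features_train = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]
labels_train = [0, 0, 1, 1]

clf = GaussianNB()
clf.fit(features_train, labels_train)

print(clf.predict([[1.2, 1.9], [5.5, 8.5]]))  # -> [0 1]
```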
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. Also, one can submit assignments and earn a grade for free. To earn a certificate, one can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a heavy focus on the math (the course's signature algorithm, gradient descent, is sketched after this review).
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15-week program containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined about setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
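The course's signature algorithm is batch gradient descent on a squared-error cost. The lectures work in Octave; here's the same idea as a small NumPy sketch on toy data:

```python
import numpy as np

# Toy data: y is roughly 2x
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 6.0, 8.1])

theta0, theta1 = 0.0, 0.0  # intercept and slope
alpha = 0.05               # learning rate

for _ in range(2000):
    error = (theta0 + theta1 * X) - y
    # simultaneous update of both parameters, as the lectures stress
    theta0, theta1 = (theta0 - alpha * error.mean(),
                      theta1 - alpha * (error * X).mean())

print(round(theta0, 2), round(theta1, 2))  # intercept ~0, slope ~2
```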
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including MonkeyLearn and Orange).
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: the World Happiness Dataset, plotted with X-axis: Happiness Score; Y-axis: Economy; Color: Health.
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the machine must learn to identify). A sketch of the talk's starting model follows this review.
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLU, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
It's also a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
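The talk's starting point is a single softmax layer on MNIST (about 92% accuracy), which it then grows into the ~99% convolutional net. The talk itself uses raw TensorFlow; here's a sketch of that starting point in Keras:

```python
from tensorflow.keras import layers, models, datasets

# MNIST: 28x28 grayscale digits, flattened to 784-length vectors
(x_train, y_train), (x_test, y_test) = datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

model = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(10, activation="softmax"),  # one linear layer + softmax
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)

loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy: {acc:.3f}")  # around 0.92 for this linear model
```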
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematic intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur, includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test of how much students actually learned. (A tiny demonstration of this overfitting idea follows this review.)
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, it was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls the course together conceptually. The map of the course from the final lecture's slides was particularly useful for organizing the information taught (image source: http://work.caltech.edu/slides/slides18.pdf).
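The practice-exam analogy is easy to reproduce in code: fit polynomials of growing degree to noisy points and watch training error keep falling while test error turns around. A toy demonstration (all data invented):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.2, 30)             # noisy training set
x_test = rng.uniform(-1, 1, 200)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.2, 200)  # held-out "final exam"

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x, y, degree)  # fit on training points only
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 3), round(test_err, 3))
# training error keeps shrinking with degree; test error eventually grows
```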
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. It demanded too much math and too many prerequisites (even with a multitude of Google sessions). A taste of the notation appears after this review.
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the historical asides, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
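For anyone gauging the prerequisites: two results the opening chapter leans on constantly are Bayes' theorem and the Gaussian density. In standard form (not quoted from the text):

```latex
p(\mathbf{w} \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w})}{p(\mathcal{D})},
\qquad
\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)
```

If those read comfortably, the early chapters are within reach; the book builds steeply from there.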
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about 300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning (the book's opening example is sketched after this review).
Contain subtle jokes that add a bit of fun.
The tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
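The book's opening worked example is k-nearest neighbors on the iris dataset; the core of it fits in a few lines (paraphrased, not the book's exact code):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1)  # classify by the nearest point
knn.fit(X_train, y_train)

print(f"test accuracy: {knn.score(X_test, y_test):.2f}")  # ~0.97
```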
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a full, non-watered-down college course.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses but are referenced within Google patents (a tiny value-iteration sketch follows this review).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some content was missing (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
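Since Markov decision processes rarely appear in intro courses, here's a tiny value-iteration sketch on a made-up two-state MDP (every number below is invented for illustration):

```python
# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9            # discount factor
V = {0: 0.0, 1: 0.0}   # value of each state, initialized to zero

for _ in range(100):   # repeat the Bellman optimality update to convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

print({s: round(v, 2) for s, v in V.items()})  # state 0 gains value via "go"
```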
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few courses, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality were a pain.
Many questions asked by students were hard to hear.
On-screen visuals ranged from hard to impossible to see.
Found myself counting the minutes.
Dr. Ng mentions TA classes and supplementary learning, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
raymondcastleberry · 6 years
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
//<![CDATA[ (function($) { // code using $ as alias to jQuery $(function() { // Hide the hypotext content. $('.hypotext-content').hide(); // When a hypotext link is clicked. $('a.hypotext.closed').click(function (e) { // custom handling here e.preventDefault(); // Create the class reference from the rel value. var id = '.' + $(this).attr('rel'); // If the content is hidden, show it now. if ( $(id).css('display') == 'none' ) { $(id).show('slow'); if (jQuery.ui) { // UI loaded $(id).effect("highlight", {}, 1000); } } // If the content is shown, hide it now. else { $(id).hide('slow'); } }); // If we have a hash value in the url. if (window.location.hash) { // If the anchor is within a hypotext block, expand it, by clicking the // relevant link. console.log(window.location.hash); var anchor = $(window.location.hash); var hypotextLink = $('#' + anchor.parents('.hypotext-content').attr('rel')); console.log(hypotextLink); hypotextLink.click(); // Wait until the content has expanded before jumping to anchor. //$.delay(1000); setTimeout(function(){ scrollToAnchor(window.location.hash); }, 1000); } }); function scrollToAnchor(id) { var anchor = $(id); $('html,body').animate({scrollTop: anchor.offset().top},'slow'); } })(jQuery); //]]>
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining an general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
Machine Learning Resource
Time (hours)
Cost ($)
Year
Credibility
Code
Math
Enjoyability
Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to
2
$0
'17
{ML} Recipes with Josh Gordon Playlist
2
$0
'16
Machine Learning Crash Course
15
$0
'18
OCDevel Machine Learning Guide Podcast
30
$0
'17-
Kaggle's Machine Learning Track (part 1)
6
$0
'17
Fast.ai (part 1)
70
$70*
'16
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
20
$25
'17
Udacity's Intro to Machine Learning (Kate/Sebastian)
60
$0
'15
Andrew Ng's Coursera Machine Learning
55
$0
'11
iPullRank Machine Learning Guide
3
$0
'17
Review Google PhD
2
$0
'17
Caltech Machine Learning on iTunes
27
$0
'12
Pattern Recognition & Machine Learning by Christopher Bishop
150
$75
'06
N/A
Machine Learning: Hands-on for Developers and Technical Professionals
15
$50
'15
Introduction to Machine Learning with Python: A Guide for Data Scientists
15
$25
'16
Udacity's Machine Learning by Georgia Tech
96
$0
'15
Machine Learning Stanford iTunes by Andrew Ng
25
$0
'08
N/A
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This YouTube-hosted mini-series covers the very fundamentals of machine learning, with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
Makes no assumptions about prior knowledge.
Glosses over potentially complex topics that might otherwise serve as noise.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker (the suggested environment) working on Windows. This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to install it (over the course of two weeks), the .exe file would repeatedly restart and keep spinning until my memory ran out, my computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with adjustable speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises that run directly in your browser (no setup required!)
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of an ML research paper on optimizing chocolate chip cookie recipes
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George W. Bush for her work in classifying uncertain (e.g., random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Has published in numerous journals, taught classes at Washington University, and contributed to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, such that the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes are more theory-based; all are interesting, yet not immediately practical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas, by applying the examples to another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included, which removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge from the lesson to their own workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I dove into my workbook's files only to find they didn't exist, and realized the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and TensorFlow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply them to your dataset as you go (a minimal sketch of lesson 1's flavor follows below).
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
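To give a feel for what lesson 1 asks of you, here's a minimal sketch in that spirit: load a CSV with Pandas, fit a scikit-learn decision tree, and check its error on held-out data. The file name and column names below are hypothetical stand-ins, not the lesson's actual dataset.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical file and column names -- substitute your own dataset.
data = pd.read_csv("train.csv")
y = data["SalePrice"]                   # the value we want to predict
X = data[["LotArea", "YearBuilt"]]      # a couple of numeric predictors

# Hold out validation data the model never sees during training.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)

model = DecisionTreeRegressor(random_state=0)
model.fit(train_X, train_y)

# Mean absolute error: on average, how far off are the predictions?
print(mean_absolute_error(val_y, model.predict(val_X)))
```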
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active way to learn ML, and the source I would most recommend to anyone (although the training plan above does help build up to this course). This course is about learning through coding; it's the only one where I truly started to see the practical mechanics come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Shows an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
The overview lesson covers their approach to learning (obviously I'm a fan!); if you're already drinking the Kool-Aid, skip past it.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend ~70 hours on this course (it's worth it).
Don't forget to shut off your AWS instance when you're done working (see the sketch after this list).
Balance out this course's depth with another course offering more breadth of machine learning topics.
Consider giving part two of the Fast.ai program a shot!
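On that tip about shutting off your AWS instance: besides clicking "Stop" in the EC2 console, one programmatic option is boto3 (AWS's official Python SDK). A minimal sketch, with a placeholder instance ID and region you'd replace with your own:

```python
import boto3

# Placeholder values -- find yours in the EC2 console.
INSTANCE_ID = "i-0123456789abcdef0"
REGION = "us-west-2"

ec2 = boto3.client("ec2", region_name=REGION)

# Stop (not terminate) so your disk and setup survive,
# but hourly compute charges stop accruing.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])

# Optionally block until the instance is fully stopped.
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
print("Instance stopped.")
```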
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
The book contains an amazing introduction to machine learning that briskly provides an overarching view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes the reader to apply that solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey, and the text glossed over it. I felt the frustration most people experience with installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introduction to each chapter thoroughly, read the chapter (paying careful attention to the code), review the questions at the end (highlighting any in-text answers), make a copy of Aurélien's GitHub repo and make sure everything works on your setup, re-type the notebooks, then go to Kaggle and try it all on other datasets. A minimal sketch of the chapter-two flow follows below.
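For a taste of what that chapter-two end-to-end loop looks like in practice, here's a minimal sketch of my own (not Aurélien's code) using scikit-learn's built-in California housing data: split, preprocess, fit, evaluate.

```python
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# A real dataset bundled with scikit-learn (downloads on first use).
housing = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(
    housing.data, housing.target, test_size=0.2, random_state=42)

# Chain preprocessing and the model so the same steps run at predict time.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("model", LinearRegression()),
])
pipeline.fit(X_train, y_train)

# Root mean squared error on data the model never saw.
rmse = np.sqrt(mean_squared_error(y_test, pipeline.predict(X_test)))
print(f"Test RMSE: {rmse:.3f}")
```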
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual approach to the underlying mathematics is very useful.
Covers a wide variety and breadth of models and machine learning basics.
In terms of presenting the concepts, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started; the limited instructions on setting up the environment and many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
Extra code is added to support the viewers; however, there is little acknowledgement of what it's actually doing, which made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Co-hosts the machine learning podcast Linear Digressions (100+ episodes) with Ben Jaffe (currently a Facebook UI engineer and music aficionado)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
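For scale, the course's coding exercises revolve around scikit-learn classifiers. The sketch below is my own stand-in for that style (synthetic data in place of the course's toy terrain dataset), not the actual mini-project code.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic two-feature data standing in for the course's toy dataset.
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Gaussian Naive Bayes: one of the first classifiers the course introduces.
clf = GaussianNB()
clf.fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))
```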
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: Andrew Ng's Coursera course is the most-referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. You can submit assignments and earn a grade for free; if you want to earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Takes a classic machine learning education approach with a heavy focus on the math.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was on MATLAB and Octave rather than more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched (August 2017) a new Coursera specialization called Deeplearning.ai, a ~15-week program containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
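Much of the course's early material builds up gradient descent for linear regression in Octave/MATLAB. As a rough translation of that core idea (my own sketch, not the course's assignment code), here it is in NumPy:

```python
import numpy as np

# Toy data: y is roughly 4 + 3x plus noise.
rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.normal(size=(100, 1))

# Prepend a column of ones for the intercept term (theta_0).
X_b = np.c_[np.ones((100, 1)), X]

alpha, n_iters, m = 0.1, 1000, len(X_b)
theta = np.zeros((2, 1))

for _ in range(n_iters):
    # Batch gradient of the squared-error cost: (1/m) * X^T (X theta - y)
    gradients = X_b.T @ (X_b @ theta - y) / m
    theta -= alpha * gradients

print(theta.ravel())  # should land close to [4, 3]
```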
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including MonkeyLearn and Orange).
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual, user-friendly interface with the potential to tell some pretty compelling stories.
Example: a World Happiness dataset view (approximated in code after this review) with:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
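Orange itself is drag-and-drop, so there's no code to copy from it. As a rough equivalent, here's a minimal matplotlib/pandas sketch of that happiness view; the CSV path and column names are hypothetical and need to match your copy of the dataset.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical file and column names -- adjust to your dataset's headers.
df = pd.read_csv("world_happiness.csv")

points = plt.scatter(df["Happiness Score"], df["Economy"],
                     c=df["Health"], cmap="viridis")
plt.xlabel("Happiness Score")
plt.ylabel("Economy")
plt.colorbar(points, label="Health")
plt.title("Economy vs. happiness, colored by health")
plt.show()
```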
Google's TensorFlow and Deep Learning without a PhD (Martin Görner) ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g., ReLU, CNNs, RNNs).
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; the 2 hours of screen time doesn't include all of the Googling and processing time it requires.
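To make the talk's vocabulary (ReLU activations, convolutions, softmax) concrete, here's a minimal Keras convolutional network on MNIST. This is my own compressed sketch, not Görner's code, and a network this small won't quite reach his 99%.

```python
import tensorflow as tf

# MNIST: 28x28 grayscale images of handwritten digits, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0  # scale pixels to [0, 1]
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```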
Caltech's Machine Learning course on iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a skilled raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would simply memorize the answers (i.e., overfit to the data) without genuinely learning the material. The final is a test of how much students actually learned.
The last half-hour of each class is always a Q&A, where students can ask questions. Their questions were useful for understanding the topics in more depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, the course was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls the course together conceptually. The map of the course (see the slides linked below) was particularly useful for organizing the information taught.
Image source: http://work.caltech.edu/slides/slides18.pdf
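Dr. Abu-Mostafa's practice-exam analogy translates directly into code. Here's a minimal sketch of my own (using scikit-learn) in which an unconstrained model aces the "practice exam" (the training set) yet stumbles on the "final" (held-out data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy labels (flip_y) so that memorizing training data won't generalize.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# With no depth limit, the tree can memorize the training set outright.
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)

print("Train accuracy:", tree.score(X_train, y_train))  # near 1.0
print("Test accuracy:", tree.score(X_test, y_test))     # noticeably lower
```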
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and too many prerequisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, it actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (probably second nature to a student of the topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well written, I flew through the book (even though it's ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contains subtle jokes that add a bit of fun.
The tip to use the Anaconda Python distribution with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as his scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
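For context on the book's gentle on-ramp, a first classifier in its style can be this small. The sketch below is my own compression (and skips mglearn entirely), not the authors' code:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# A classic beginner dataset: 150 iris flowers, 4 measurements, 3 species.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# k-nearest neighbors: label a flower by its single closest training example.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))
```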
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including VC dimension, Bayesian learning, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses, but are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some lessons were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Has given two TEDx talks:
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focuses on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
Andrew Ng's Stanford Machine Learning course on iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality were a pain to sit through.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!