#app like go mechanic
harrywatson4u · 3 months
Text
Here is the step-by-step car repair app development process that will help you come up with a successful business app like Go Mechanic.
Step 1 – Conduct Market Research & Analysis
Step 2 – Choose App Features & Tech Stacks
Step 3 – Finalize UX/UI Design
Step 4 – Begin Car Service App Development
Step 5 – Test & Launch the App
Step 6 – App Support & Maintenance
Step 7 – Market And Promote Your App
0 notes
lunityviruz · 5 months
Text
If you wanna see a loser bitch get mad as hell tell a fanfic writer to PROPERLY tag their fic as rape. Not just write "noncon" in the post but go into the notes and actually tag it as "tw rape" because they get mad asf when you call them out on it. How the fuck am I supposed to find good smut fics when each and everyone of yall are sexualizing and romanticizing rape and using the excuse that it's "dark content" no nigga you're fucking weird. You're a weirdo with a rape fetish and you're projecting it through a fictional character who has nothing to do with that and you refuse to tag it because you want notes and interactions with more people who say shit like "Omg X character nonconning his darling is sooo hot 🥺💗🎀🌸".
Don't get mad at me for calling you out on it because if you didn't write it in the first place, and if you actually tagged it I wouldn't have to see it at all.
24 notes · View notes
mightybeaujester · 1 year
Text
Idk how I never noticed them but
Tim’s background vocals in Frankenstein???
The voice itself but then also the sad little laugh on “dreamt”, the betrayal in it
Turning “something’s” into “something is not right” and the emphasis on the confusion and trust it had in Frankenstein
The deep, repeated “learning and growing” through the narrated part, showing how the AI is always working in the background
I will never be normal about this guy
109 notes · View notes
occasionalsnippets · 5 months
Text
i downloaded obey me nightbringer yesterday. why tf does it feel like a 20 for 1 app deal
7 notes · View notes
moonluvr-rickst4r · 2 months
Text
i want to live
i want learn multiple languages at once and confuse myself, i want to embarrass myself laughing too loud in public, i want to take on too many hobbies and lose myself in a craft, i want to try slam poetry and stumble over my words. i want fake confidence despite being socially anxious, i want to leave my coat home and shiver in regret. i want to live and to learn.
i want to be hurt, i want to be a friend, i want to experience. i want to feel everything; all at once.
3 notes · View notes
viiridiangreen · 1 day
Text
the extreme dopamine rush of having Finished The Goddamn Painting. vs the need to keep it under wraps for a lil while bc it's for a zine heheheheheheh i pulled an allnighter which i hadn't done for not-gaming purposes since uni. the results are:
kinda nice
the absolute fuckoff biggest giant fucking file i've ever worked on both in terms of total pixel area & layers. i think this might even be print material?????? idk if anyone would like. want to acquire it. considering the figurative dog comic flames all around us. but yes it's a thing that would theoretically look nice printed out. and that could be done at Considerable Size
IT'S ACTUALLY FINISHED. for the first time in years. it's not a nice looking WIP or a dressed up doodle i sat my ass down and rendered like my life depended on it
😭😭😭😭😭😭😭
3 notes · View notes
sodacowboy · 1 month
Text
Okay, so, if I was intentionally trying to make myself feel bad what would I do to achieve that? Scroll through instagram for an extended period of time. Which means that I need to be doing literally anything else.
2 notes · View notes
aropride · 1 year
Text
Will never ever forget the time i was 16 and i opened up a bmc fic bc i was making my way thru the entire boyf riends tag (unsuccessfully, but i did succeed with another ship) and it was a fic about jeremy coping with ptsd by writing absurd amounts of fanfic and i dont think id ever felt that type of deer-in-headlights mirror-held-up-in-front-of-me feeling from a fic in my fucking life . i will be thinking about that forever.
9 notes · View notes
pallases · 7 months
Text
submitted my first app 😖
#😭 didn’t plan to start this early but they said to do it by tonight and now i am worried abt when other companies want their apps in. i#should have asked them#i don’t think they all want them in now tho bc one of them told me she doesn’t start responding until january which. probably means i can#wait a bit right?? i don’t know 😭#personal#the engineering chronicles#feeling pretty okay abt how today went actually one employer told me i have a very high gpa and that she thought she read it wrong and#another i was talking to abt how even though they’re not a primarily medical company they do do medical stuff and i named and spoke abt the#things they’ve worked on and he seemed impressed by that knowledge. so#really worried tho bc. there are hardly any medical places my school has approved to apply to for this and companies that dont do medical#stuff don’t want biomedical engineering interns even if everything but my electives is the same as an ee’s coursework. bc we’re not going t#stick around for them to hire post grad. like ppl from these companies are straight up telling me not to bother applying or that they don’t#accept apps from ppl in my major etc. which fucking sucks especially since in ADDITION to that the vast vast majority of the companies#that Do have medical stuff going on are mechanical or manufacturing based not electrical. like. what do you expect me to do here#there is one company (the one the guy seemed impressed w me abt) that does electrical and coding stuff and i am really really interested in#them. but as i said the medical stuff is not their main focus and they’re more an all around place. and they also won employer or the year#or whatever a couple years ago. which means Everyone is going to be applying to this company. ugh
4 notes · View notes
soliusss · 1 year
Text
Me realizing I can't gaslight myself out of a ptsd disorder
19 notes · View notes
earmo-imni · 1 year
Text
The logical part of my brain: you have some kind of respiratory disease; even though it’s not covid it’s still dangerous to the toddlers and infants you work with, so the ethical thing to do is tell your boss you can’t come into work tomorrow! Plus you feel bad anyway, your chest and throat hurt and you keep coughing! It’s okay to take off work when you feel bad; in fact, it’s an important part of self-care. And it’s not like you can’t afford to miss work. You have a safety net, it’ll be okay.
The anxiety part of my brain: But they need me, I could just wear my mask and that would keep the kids safe, right? I don’t feel that bad, anyway, I don’t even have a fever, I can just take tylenol for the pain and I can handle it. And how would I tell my boss I can’t come in anyway? I’ve never had to call off sick before. What do I say? Everything I come up with sounds wrong! And I’m not even supposed to tell my BOSS-boss, I need to tell the assistant who keeps track of scheduling but I never put her number into my phone so I can’t do that!
3 notes · View notes
Text
as you held me down you said
Relationships: Nicklas Backstrom/Alexander Ovechkin
Word count: 6.1k
Rating: M/M, explicit (other tags on ao3)
Summary: 
Nicke raises his eyebrows. “You’re the problem, not me,” he says. Now Alex’s thumb is lightly tracing the inseam of his pants, almost lazily, at the crease of his inner thigh. There’s something about the exact pressure that he’s using that’s making it slightly more distracting than it should be. “I would never.”
Alex snorts. “Liar,” he says, leaning in closer. “You fool everyone else, maybe. But can’t fool me.” His mouth is so close that his breath ghosts over Nicke’s lips like a second skin. A strange little shiver goes down Nicke’s spine, in spite of himself. 
“Yeah?” he says. “You know who I am?”
Alex’s mouth twitches. “Always,” he says, and then slides out of his seat and onto his knees in front of Nicke.
(read on ao3) (🔒)
Sometimes you're just minding your own business merrily not looking at the Washington Capitals crashing and burning in their last few games before playoffs, and then your twitter gets inundated with a thousand videos of Alex Ovechkin joyfully showboating in a press box in front of 20,000 fans while Nicklas Backstrom fondly films him, and the only way to cope with it all is to write a thousand words about their gross exhibitionist kink that morphs into 6k about their even grosser marriage ¯\_(ツ)_/¯
9 notes · View notes
chewysgummies · 24 days
Text
Maybe I should make a Twitter account again just to turn it into a killbot 86 fan account since I'm brain rotting over him so badly.
1 note · View note
imaginarypasta · 3 months
Text
completely changing my stance on spoilers (hyperbole) because fundamentally i’m not a person that is bothered by spoilers in appropriate contexts i think & there are many flavors of spoilers that are imo permissible (respecting ofc that some ppl hate them im talking about receiving spoilers) & i think the text should stand on its own. but there are situations where people are so cavalier about deeply relevant reveals/twists/whatever, that are quite… disrespectful to the work.
like say you have this narrative where you spend the whole time thinking one thing but then something happens where it’s revealed the information you’ve been getting the whole time is wrong in some way and now that you know it, you can never go back, but having that change in perspective *and* having that initial reading (or whatever) experience was quintessential to the themes/narrative/etc (especially themes in this case). like i don’t think that should be spoiled (unless someone is like. asking. or you’re having a specific public conversation abt it and they like overhear or something but can leave. like that sucks but it’s also like. what can you do) and the act of spoiling those things just misses? doesn’t care? for the mechanical aspect of the reveal & what it does retroactively, practically, and for any future readings.
and like i get it i’m sure i’ve spoiled things like this for people by accident bc it’s not like i ever tag spoilers for stuff when im ranting to the void about whatever on here. but the specific context for this situation was advertising a book in a comments section of a post not about the book. and the recommendation itself was based on this super important reveal and it just so happened to be a book that i am in the middle of so it sucks
and i think most people are good about acknowledging/recognizing these moments. i say often “i’m not bothered by spoilers” but im also used to hearing (and saying) “i get it but this spoiler greatly impacts your experience. it’s relevant that you already have opinions on something before you learn more” or something along those lines. it’s just when it occurs to me, it always happens to be something i’m super invested in and would’ve thought was like the best twist ever ever
#personal#i’m generalizing the experience because it’s happened a few times to me lately all for situations in which the initial read through being#wrong was an important mechanic while reading#just like. i know im being really dramatic about it when i don’t need to be and also probably hypocritical (i can’t think of a specific#instance but i know for a fact im very cavalier abt revealing spoilers myself. i try to avoid this in one on one conversations but im not#perfect)#i’m literally just pissed and ranting about it. i do think as a general rule this applies to what i think but it’s difficult to put into#practice so it’s like. i need to adapt it i suppose#and it’s just worse bc like. i read the comment and im like ‘well shit that maybe ruins that for me which would probably be my favorite#aspect of the book’ and turned off my phone and left it for a while. (this was my fatal mistake)#and then i go later to open my phone and someone is like ‘oh yeah i was shocked when that happened. here is the specific line btw’ and bc#i’m in the middle of the book i have enough context to understand exactly what’s going to happen#like that’s my fault for not closing out the app but like how could i know and it’s like i’m mad but at who#and it is a situation where commenting ‘spoilers for book’ would be the same as saying the spoiler itself so really i just shouldn’t have#read the comments. but am i supposed to just not do anything for as long as it takes me to read the book?? that’s completely impractical#which is why i’m so cavalier about spoilers overall bc i think the text should stand on its own even when stuff like this accidentally#happens. but if the text relies on the mechanic of ignorance and later reveal i think that’s impressive not bad#AGHHHH it’s complicated and i’m mad…#this is how i feel about the ******** remake too. 
taking out the reveal (paraphrasing) is… ruining the#narrative is too hardcore but is what i think#see i almost did it myself :(#and yk what it’s also about **************
0 notes
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
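The arithmetic here can be sketched back-of-envelope style. The per-query cost range is the one cited above; the query volume is a made-up placeholder, purely illustrative:

```python
# Back-of-envelope inference economics. The 0.36-1 cent/query
# range comes from the post; the one-billion-queries-a-month
# volume is a hypothetical placeholder, not a real figure.
def monthly_inference_cost(queries: int, cost_per_query_usd: float) -> float:
    """Raw serving cost only -- ignores training, staff, and capex."""
    return queries * cost_per_query_usd

low = monthly_inference_cost(1_000_000_000, 0.0036)
high = monthly_inference_cost(1_000_000_000, 0.01)
print(f"${low:,.0f} to ${high:,.0f} per month just to serve free queries")
```

Even at the low end of the range, giving queries away at scale burns millions of dollars a month before a single salary or GPU lease is paid.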
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable - errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
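As a toy illustration of what "next-word-guessing" means at its statistical core, here is the idea boiled down to counting. Real models are neural networks trained on billions of documents, not bigram counters over eleven words, but the objective is the same: emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always
# emit the most statistically likely continuation. A toy stand-in
# for the "applied statistics" core of next-word prediction.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most frequent observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # -> "cat" (seen twice, vs once each for "mat" and "fish")
```

The model has no idea what a cat is; it only knows that "cat" is the likeliest thing to follow "the" in its training data. Scale that up and you get fluent output whose errors are, by construction, the most plausible-looking ways of being wrong.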
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
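One mitigation for this class of attack (not one the article proposes, just an illustrative sketch) is to reject any dependency name that isn't on a vetted allowlist, so a plausible-sounding hallucinated package fails loudly instead of resolving to whatever a squatter registered. The allowlist contents below are hypothetical:

```python
# Sketch of a defense against hallucinated-package ("slopsquatting")
# attacks: refuse to install anything not on a vetted internal
# allowlist. The allowlist here is a hypothetical example.
VETTED = {"requests", "numpy", "huggingface_hub"}

def check_requirements(requirements: list[str]) -> list[str]:
    """Return the dependencies that are NOT on the allowlist."""
    return [pkg for pkg in requirements if pkg not in VETTED]

suspect = check_requirements(["numpy", "huggingface-cli"])
print(suspect)  # ['huggingface-cli'] -- the AI-invented name gets flagged
```

The point is mechanical rather than vigilance-based: a human reviewer can miss a plausible name, but a set-membership check can't.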
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
733 notes · View notes
lialacleaf · 6 months
Text
Simon Riley x Reader
Bella Notte - Pt. 1
Synopsis: Simon’s dog REALLY likes you. And maybe Simon does too. It’s hard to make a move on you though when Riley is determined to embarrass him.
Art by @shkretart because their Simon is my favorite~
Warnings: second hand embarrassment, no editing
It was that time of year between the light chill of fall and the frost of winter, when you needed a coat in the morning and gloves to keep your fingers from going stiff, only to shed your layers for a light jacket until the sun started to set in the early evening.
It was raining again, and as you glanced up at the grey sky from under your umbrella you wondered whether, if the weather persisted into the night, you might wake up to a frozen driveway.
Your eyes darted over the address on your phone screen for the hundredth time as you approached the gated neighborhood, taking note of the quaint townhouses smooshed together. You approached the gate with some apprehension, taking note of the security guard who looked ready to defend his post with his very life despite being armed with only a taser.
“Afternoon, Miss,” he greeted, tipping his head at you. Police officers in London were polite more often than not, but you still got a little nervous about speaking to them. The second you opened your mouth they either thought you were a tourist, or coming around to cause trouble.
“Hi, I’m here for-“ you paused to check the address once more. “33 B,” you said, showing him your phone screen that displayed the quaint little pet-service app. “I’m a pet sitter.”
He looked at you contemplatively for a moment, and you swallowed thickly. “You from around these parts?” He asked, and you shook your head.
“I moved to York a few months ago,” you explained, preparing to pull out your IDs when he held up a hand.
“You met the fellow that lives there before?” He asked warily, and you frowned.
“Not in person, but he passed the background check so I’m sure it’s alright,” you argued.
He gave you a good look, as if he were trying to memorize your appearance before nodding to himself and swiping his badge. The gate opened with a mechanical whirring and he beckoned you inside.
You shook your head at the exchange, shoving your phone back into the pocket of your raincoat.
33B appeared to be a relatively new unit, the paint on the door appearing fresh as if it had just been done in the past few days.
There was no welcome mat, and the front porch seemed rather bare. You half expected one of those ‘Home of a German Shepherd’ signs to be hanging on the front door, but there was very little to indicate you were in the right place.
Regardless, you knocked on the door, noticing the lack of a bell.
There was no answer.
You knocked again, this time a little harder.
“Hello? Is anyone there? It’s y/n from TailWag!” You called. You were just about to turn around when the door swung open, revealing a tall man with soft eyes and a thick mustache. He seemed surprised to see you before offering you a polite smile.
“Are you…Simon?” You asked, but the man shook his head. “Oh! I’m so sorry, I-“
“No, no. You’re in the right place. Was just on my way out.” He nodded to you with a smile, stepping around you as he let himself out.
You watched him leave, brow raised curiously before the clearing of a throat had your head swiveling around.
The sight that greeted you had you feeling like a gnome in the presence of a giant. The man was tall, with a head of messy blonde hair and piercing brown eyes that had you shaking a little in your bright yellow rain boots.
“Oh.”
He regarded you warily with a raised brow. “Y/n?”
You nodded quickly, almost giving yourself whiplash. There was something so commanding about the way he spoke.
“Right. Come in.”
His home was just as sparse on the inside as it was on the outside. “Sorry if this was a bad time.”
“It’s the time we agreed on,” he stated flatly.
“Right, I just- you had company, and I didn’t mean to interrupt…” you trailed off as he continued to stare at you with that piercing gaze. “So Riley? Where is she?” You asked, getting to the reason for your visit.
Simon let out a sharp whistle that made you jump, and the sound of feet running down the stairs alerted you to the incoming four-legged creature.
You watched the dog bound around the corner and into the living room, tongue lolling and amber eyes alight.
A smile broke out on your face as you kneeled down to give the dog some attention. “Hello there,” you cooed, scratching her behind the ears. “Aren’t you a pretty girl.”
“What brings an American out to York Minster?” He asked, regaining your attention. His eyes were cold and calculating.
“Right. My father moved out here after he and my mother split. He left her out of the will so I came to sell his home when he passed but... the gothic cathedrals kinda grew on me, and I got rather inspired so I decided to stay. Wasn’t much left on the mortgage anyhow,” you explained.
He raised both brows at you curiously. “And you pay for that with dog-sitting?”
You shook your head. “Absolutely not, I’m a Ghost Writer. It makes good money. The dog-sitting is so I feel less lonely,” you said, returning your attention to bestowing Riley with your affection and massaging the scruff around her neck.
“Why not just get a dog?” He asked, crossing his arms over his chest.
You glanced up at him, awkwardly meeting his gaze. “I uhh, I had one, passed away shortly after my Dad. I think she missed him. I haven’t been ready to move on,” you admitted, feeling rather put on the spot with the way Simon was watching you as if he were looking for a flaw, or a reason to kick you out of his home.
“Fair enough,” he agreed, and you loosed a breath. You couldn’t help but feel like you were going to end up with a knife in your throat if you made one wrong move. “I’ll be gone for a few weeks at a time. You live around here?” He asked curtly.
You didn’t like the way he looked at you. It felt…judgmental, as if he were trying to decide whether you were trustworthy, or if you were plotting some evil deed. “I live on the other side of town.”
He nodded. “Feel free to use the spare room, the place is more hers than it is mine at this point. She deserves a good retirement,” he said gesturing to the dog.
You blinked as realization finally set in. “Oh! You’re military! I see now,” you said, glancing down at Riley, who was still patiently seated beside her master.
“So you’re not retired?” You asked, and he nodded. “There are plenty of adoption agencies, and families that take on service animals-“
“I’m her family,” he interrupted, sounding very close to having snapped at you, and you winced.
“Right! Of course, I just meant that pet-sitters are expensive and-“
“You’re concerned I can’t afford to pay you?” He asked gruffly.
“No! No I- That’s not what I meant,” you palmed your face as you stood to your full height, which wasn’t much compared to his. “I’ve been doing this since I was in college and I’ve had more than a few cases of abandonment. It’s usually the ones that are gone a lot. I just wanna know what I’m getting into, alright?” You explained, holding your hands out peacefully as if you were trying to convince a wild animal not to attack you.
You briefly noted that Riley seemed much more manageable than her handler. You, however, were too soft-hearted, and he simply had to understand that if you were going to care for Riley.
He eyed you for a moment, before nodding in understanding. “If I ever don’t make it back, arrangements will be made. You won’t need to worry about that,” he assured you.
You let out a relieved sigh. “Good. We’re on the same page then.”
He nodded in agreement, and you had half a mind to ask him to stop staring at you like he was deciding how to go about skinning you alive.
“I’ll see you tomorrow then,” you said, patting Riley on the head much to her delight.
“My flight leaves early in the morning. I’ll text you a code for the front door.”
You forced a smile as you offered him your hand in a friendly gesture. “Perfect.” He didn’t accept it, but you weren’t too disappointed. You were just grateful you wouldn’t have to see him for the next few weeks.
AN: ahhh this one is gonna be fun! The inspiration for this story came from my own fur babies, one of which I’m using as my visual for Riley. Can’t wait to share part 2!