#authorship bias
justpostsyeet · 1 year
Text
You know, according to Tolkien's framing, the accounts of Middle-earth came to the author via Hobbits and the manuscripts they left behind.
In the eyes of Hobbits, elves were super kind dudes who gave them food and shelter, but in context that was only because Bilbo was with Gandalf on a fucking journey to reclaim Erebor, and the rest of the hobbits were in on the main plan to kill literal Satan.
And about the manuscript thing: relics only survive when they're kept in the best condition. Elves were the superior species of Middle-earth with advanced technology, so their version of history would be the best preserved.
I'm not saying that all elves were bad or something. What I'm saying is: what if Middle-earth history has an authorship bias? There are so many instances in which the flawed deeds of elves are hidden in flowery language, such as Eöl taking the (unwilling) Aredhel as his wife. In many instances where you read about Galadriel, the subtext makes her look like a power-hungry absolutist, but her causes are always framed as those of a noble lady fighting for a great cause. It's really only in the Silmarillion that flawed elves are shown at all.
So, what if elves were flawed like the secondborn and the other creatures of Middle-earth, but they always appeared noble and great because of authorship bias?
And by authorship bias, I don't mean bias on Tolkien's part. If we go by Tolkien's own myth of Middle-earth, then it's the authorship bias of whoever wrote the history of Middle-earth that Tolkien later found and translated. So it's not Tolkien's fault, because he's translating what he was given; the fault lies with whoever wrote the original verses.
Why am I rambling about all this? Because we were studying authorship bias and the bending of actual events in the writing of history, and it made me think about the history of Middle-earth.
laiqverse · 1 year
Text
The Ethical Implications of AI in Design
As artificial intelligence (AI) continues to play a larger role in the design process, there are important ethical considerations that designers and technologists must grapple with. From issues of bias and fairness to questions of ownership and creativity, the use of AI in design raises important questions about the future of the field.
Bias and fairness in AI design
One of the most pressing…
a-magical-evening · 6 months
Text
Deconstructing South Park: Critical Examinations of Animated Transgression
Trey Parker, Matt Stone, and Authorship in South Park and Beyond, Nick Marx
BASEketball might be seen to exhibit some of the characteristics Justin Wyatt identifies as accommodating queerness in purportedly “straight” comedies. Of Swingers (1996), Wyatt describes the film’s lack of a “strong male institutional bias” that directs the spectator to construct male bonding as straight. […] Amidst the success of films like There’s Something about Mary, critics identified a “farts and phalluses fixation” in the cycle of gross-out comedies during the summer of 1998. Parker and Stone indulge in much of the same carnivalesque humor throughout BASEketball, but ultimately use queerness as a way to undermine viewer expectations about this humor. That is to say, the film functions in a conventionally parodic way—hinting at homosexual bonds among its male characters—until its climactic scene, in which main characters Coop and Remer kiss and queerness becomes explicit. At this moment Parker and Stone connect their performances in BASEketball to publicity discourses that reinforce the duo’s oppositionality.
BASEketball also constructs queerness through the aforementioned idiosyncratic patter of Parker and Stone. In Swingers, Wyatt argues, “Queerness resides primarily in the forms of communication and interaction between the friends in the group,” noting that its protagonists speak in a highly coded conversation with repeated use of words like “baby” and “money.” A similar tendency manifests in Coop and Remer’s use of the word “dude,” a nod to the “dude-speak” carried over from their South Park authorial persona.
Another method that Wyatt identifies for reading the male bonding in Swingers as gay is that with women, “flirtation rather than seduction is most significant.” […] This dynamic plays out between Coop and Remer as well. […] Indeed, a queer reading of the relationship between Coop and Remer points to the larger, conventionally parodic project at work in BASEketball— suggesting that its professional-athlete protagonists can be read as gay, thus undermining the archetypal, hypermasculine image of athletes currently circulating in sports imagery.
But the film’s climactic scene, in which the characters share a sloppy open-mouthed kiss, subverts the queer coding that had up to that point been only connotative. In other words, if part of BASEketball’s goal is to lampoon hypermasculinity by providing gay subtexts, why make the gag explicit and take it over the top? That the moment was improvised only serves to muddle matters. […] To [Parker and Stone], the kiss seems to be no big deal, having just as much a place in mainstream comedy as the gay subtexts that have existed there for years. [X]
sabakos · 10 months
Note
🔥
First off, my own personal bias on textual criticism is that I think basically every single text that circulated privately prior to widespread publication was subject to a large degree of accretion and redaction during that time period, which only ever stops in the case of widespread transmission. And I think that this should be the default assumption until proven otherwise. This has been shown fairly conclusively with the Torah and a few other works of religious importance, but I think it's underexamined for secularized works like the Iliad and Odyssey, although that itself has shifted in the past few decades with a greater number of scholars appreciating the polyphony inherent in these works to the point where the Homeric Question is a bit passé and even the "Hesiodic question" starts to look like it's going to give way to multiple authorship before even being considered.
But the truly spicy take here is this: I think the idea that Plato and Aristotle wrote most of the content in the works attributed to them is probably also a minority opinion among critical scholars with the expertise and ability to evaluate such matters, but few of them want to say anything definitive, even though many of them regularly hint at such things. They do this because if a work wasn't entirely written by the famous genius person, but instead accreted material over centuries, funding agencies will stop paying anyone to study it, and the works deemed "dubia" will start being left out of published editions and translations. John Cooper already had to fight tooth and nail to get the entire Thrasyllan Canon into the Hackett edition of Plato's Complete Works. And that's because funding agencies are philistines; it doesn't matter what percentage of the Timaeus one specific Athenian asshole wrote, what matters is that it was the most important.
So effectively I believe there's an omerta, a "noble lie" if you will, among classical scholars trained in philology not to do any textual criticism on Plato or Aristotle anymore like they used to a century ago. Instead they're content to let the less informed "literary" critics (who can't even read Greek well) fight over the authenticity of already decanonized works like First Alcibiades and Rival Lovers on the basis of supposed "Platonic" doctrine or "stylometric" differences. And meanwhile the actual experts know there's a good chance Plato didn't write even half of the Republic.
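For what it's worth, the "stylometric" arguments being sneered at here mostly reduce to something simple: comparing frequency profiles of common function words across texts. Here is a toy sketch in Python; the sample sentences are invented, and real attribution work uses much larger word lists and established methods such as Burrows' Delta, so treat this as an illustration of the idea rather than a usable method:

```python
from collections import Counter

# Toy stylometric sketch: compare relative frequencies of common
# "function words", the kind of signal stylometric arguments about
# disputed authorship rely on.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "is", "for"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def distance(a, b):
    """Mean absolute difference between two frequency profiles
    (a crude stand-in for Burrows' Delta)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Invented sample sentences, purely for illustration.
sample_a = "the forms of the good and of the just are known to the soul"
sample_b = "run fast jump high swim deep climb far"

d_same = distance(profile(sample_a), profile(sample_a))
d_diff = distance(profile(sample_a), profile(sample_b))
print(d_same, d_diff)  # identical texts score 0; divergent styles score higher
```

Function words are the usual target because authors deploy them largely unconsciously, which is exactly what makes them a (contested) authorship signal.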
CFP: AI and Fandom
Unfortunately, this special issue will not be moving forward. All submitted pieces are being considered for our general issue. 
Due in part to well-publicised advancements in generative AI technologies such as GPT-4, there has been a recent explosion of interest in – and hype around – Artificial Intelligence (AI) technologies. Whether this hype cycle continues to grow or fades away, AI is anticipated to have significant repercussions for fandom (Lamerichs 2018), and is already inspiring polarised reactions. Fan artists have been candid about using creative AI tools like Midjourney and DALL-E to generate fan art, while fanfiction writers have been using ChatGPT to generate stories and share them online (there are 470 works citing the use of these tools on AO3 and 20 on FanFiction.net at the time of writing). It is likely the case that even greater numbers of fans are using such tools discreetly, to the consternation of those for whom this is a disruption of the norms and values of fan production and wider artistic creation (Cain 2023; shealwaysreads 2023). AI technology is being used to dub movies with matching visual mouth movements after filming has been completed (Contreras 2022), to analyse audience responses in real-time (Pringle 2017), to holographically revive deceased performers (Andrews 2022; Contreras 2023), to build chatbots where users can interact with a synthesised version of celebrities and fictional characters (Rosenberg 2023), to synthesise celebrities’ voices (Kang et al. 2022; Nyce 2023), and for translation services for transnational fandoms (Kim 2021).
Despite the multiple ways in which AI is being introduced for practical implementations, the term remains a contested one. Lindley et al. (2020) consider "how AI simultaneously refers to the grand vision of creating a machine with human-level general intelligence as well as describing a range of real technologies which are in widespread use today" (2) and suggest that this so-called 'definitional dualism' can obscure the ubiquity of current implementations while stoking concerns about far-future speculations based on media portrayals. AI is touted as being at least as world-changing as the mass adoption of the internet and, regardless of whether it proves to be such a paradigm shift, the strong emotions it generates make it a productive site of intervention into long-held debates about the relationships between technology and art, what it means to create, what it means to be human, and the legislative and ethical frameworks that seek to determine these relationships.
This special issue seeks to address the rapidly accelerating topic of Artificial Intelligence and machine learning (ML) systems (including, but not limited to Generative Adversarial Networks (GANs), Large Language Models (LLMs), Robotic Process Automation (RPA) and speech, image and audio recognition and generation), and their relationship to and implications for fans and fan studies. We are interested in how fans are using AI tools in novel ways as well as how fans feel about the use of these tools. From media production and marketing perspectives we are interested in how AI tools are being used to study fans, and to create new media artefacts that attract fan attention. The use of AI to generate transformative works challenges ideas around creativity, originality and authorship (Clarke 2022; Miller 2019; Ploin et al. 2022), debates that are prevalent in fan studies and beyond. AI-generated transformative works may present challenges to existing legal frameworks, such as copyright, as well as to ethical frameworks and fan gift economy norms. For example, OpenAI scraped large swathes of the internet to train its models – most likely including fan works (Leishman 2022). This is in addition to larger issues with AI, such as the potential discrimination and bias that can arise from the use of ‘normalised’ (exclusionary) training data (Noble 2018). We are also interested in fan engagement with fictional or speculative AI in literature, media and culture.
We welcome contributions from scholars who are familiar with AI technologies as well as from scholars who seek to understand its repercussions for fans, fan works, fan communities and fan studies. We anticipate submissions from those working in disparate disciplines as well as interdisciplinary research that operates across multiple fields.
The following are some suggested topics that submissions might consider:
The use of generative AI by fans to create new forms of transformative work (for example, replicating actors’ voices to ‘read’ podfic)
Fan responses to the development and use of AI including Large Language Models (LLMs) such as ChatGPT (for example, concerns that AO3 may be part of the data scraped for training models)
Explorations of copyright, ownership and authorship in the age of AI-generated material and transformative works
Studies that examine fandoms centring on speculative AI and androids (e.g. Her, the works of Isaac Asimov, Westworld, Star Trek)
Methods for fan studies research that use AI and ML
The use of AI in audience research and content development by media producers and studios
Lessons that scholars of AI and its development can learn from fan studies and vice versa
Ethics of AI in a fan context, for example deepfakes and the spread of misinformation 
Submission Guidelines
Transformative Works and Cultures (TWC, http://journal.transformativeworks.org/) is an international peer-reviewed online Gold Open Access publication of the nonprofit Organization for Transformative Works, copyrighted under a Creative Commons License. TWC aims to provide a publishing outlet that welcomes fan-related topics and promotes dialogue between academic and fan communities. TWC accommodates academic articles of varying scope as well as other forms, such as multimedia, that embrace the technical possibilities of the internet and test the limits of the genre of academic writing.
Submit final papers directly to Transformative Works and Cultures by January 1, 2024. 
Articles: Peer review. Maximum 8,000 words.
Symposium: Editorial review. Maximum 4,000 words.
Please visit TWC's website (https://journal.transformativeworks.org/) for complete submission guidelines, or email the TWC Editor ([email protected]).
Contact—Contact guest editors Suzanne Black and Naomi Jacobs with any questions before or after the due date at [email protected]
Due date—Jan 1, 2024, for March 2025 publication.
Works Cited
Andrews, Phoenix CS. 2022. ‘“Are Di Would of Loved It”: Reanimating Princess Diana through Dolls and AI’. Celebrity Studies 13 (4): 573–94. https://doi.org/10.1080/19392397.2022.2135087.
Cain, Sian. 2023. ‘“This Song Sucks”: Nick Cave Responds to ChatGPT Song Written in Style of Nick Cave’. The Guardian, 17 January 2023, sec. Music. https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave.
Clarke, Laurie. 2022. ‘When AI Can Make Art – What Does It Mean for Creativity?’ The Observer, 12 November 2022, sec. Technology. https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.
Contreras, Brian. 2022. ‘A.I. Is Here, and It’s Making Movies. Is Hollywood Ready?’ Los Angeles Times, 19 December 2022, sec. Company Town. https://www.latimes.com/entertainment-arts/business/story/2022-12-19/the-next-frontier-in-moviemaking-ai-edits.
———. 2023. ‘Is AI the Future of Hollywood? How the Hype Squares with Reality’. Los Angeles Times, 18 March 2023, sec. Company Town. https://www.latimes.com/entertainment-arts/business/story/2023-03-18/is-a-i-the-future-of-hollywood-hype-vs-reality-sxsw-tye-sheridan.
Kang, Eun Jeong, Haesoo Kim, Hyunwoo Kim, and Juho Kim. 2022. ‘When AI Meets the K-Pop Culture: A Case Study of Fans’ Perception of AI Private Call’. https://ai-cultures.github.io/papers/when_ai_meets_the_k_pop_cultur.pdf.
Kim, Judy Yae Young. 2021. ‘AI Translators and the International K-Pop Fandom on Twitter’. SLC Undergraduate Writing Contest 5. https://journals.lib.sfu.ca/index.php/slc-uwc/article/view/3823.
Lamerichs, Nicolle. 2018. ‘The next Wave in Participatory Culture: Mixing Human and Nonhuman Entities in Creative Practices and Fandom’. Transformative Works and Cultures 28. https://doi.org/10.3983/twc.2018.1501.
Leishman, Rachel. 2022. ‘Fanfiction Writers Scramble To Set Profiles to Private as Evidence Grows That AI Writing Is Using Their Stories’. The Mary Sue, 12 December 2022. https://www.themarysue.com/fanfiction-writers-scramble-to-set-profiles-to-private-as-evidence-grows-that-ai-writing-is-using-their-stories/.
Lindley, Joseph, Haider Akmal, Franziska Pilling, and Paul Coulton. 2020. ‘Researching AI Legibility through Design’. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376792
Miller, Arthur I. 2019. The Artist in the Machine: The World of AI-Powered Creativity. Cambridge, Massachusetts: MIT Press.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Nyce, Caroline Mimbs. 2023. ‘The Real Taylor Swift Would Never’. The Atlantic, 31 March 2023. https://www.theatlantic.com/technology/archive/2023/03/ai-taylor-swift-fan-generated-deepfakes-misinformation/673596/.
Ploin, Anne, Rebecca Eynon, Isis Hjorth, and Michael Osborne. 2022. ‘AI and the Arts: How Machine Learning Is Changing Artistic Work’. Report from the Creative Algorithmic Intelligence Research Project. University of Oxford, UK: Oxford Internet Institute. https://www.oii.ox.ac.uk/news-events/reports/ai-the-arts/.
Pringle, Ramona. 2017. ‘Watching You, Watching It: Disney Turns to AI to Track Filmgoers’ True Feelings about Its Films’. CBC, 4 August 2017. https://www.cbc.ca/news/science/disney-ai-real-time-tracking-fvae-1.4233063.
Rosenberg, Allegra. 2023. ‘Custom AI Chatbots Are Quietly Becoming the next Big Thing in Fandom’. The Verge, 13 March 2023. https://www.theverge.com/23627402/character-ai-fandom-chat-bots-fanfiction-role-playing.
shealwaysreads. 2023. “Fascinating to see…” Tumblr, March 28, 2023, 11:53. https://www.tumblr.com/shealwaysreads/713032516941021184/fascinating-to-see-a-take-on-a-post-about-the; https://www.tumblr.com/androidsfighting/713056705673592832?source=share
taqato-alim · 6 months
Text
Extrapolation of the potential effects of generative AI based on the effects of the invention of the printing press
Here are some of the major historical events closely related to the invention of the printing press:
Gutenberg Bible (1455): Considered the earliest surviving book printed with movable metal type in Europe. This was Johannes Gutenberg's magnum opus and demonstrated the viability of printing. It helped spread the printing press technology across Europe rapidly.
Spread of Humanism (15th century): The rise of humanism emphasized classical learning and education of the population. This created demand for books which fueled the growth of printing. Works of scholars like Erasmus were widely printed and disseminated.
Protestant Reformation (16th century): Martin Luther effectively used the printing press to mass produce and distribute his 95 Theses and other writings criticizing the Catholic church. This helped spark the Protestant Reformation movement by disseminating ideas to a wider audience.
Decline of scriptoria (15th century): As the printing press became dominant, it replaced handwritten manuscript production in scriptoria attached to monasteries. This was a major cultural shift from manuscript to print culture.
Vernacular literature (15th-16th century): The printing press enabled literature to be published in local languages rather than just Latin, making it accessible to the general populace and helping establish national identities and cultures.
Scientific revolution (16th-17th century): New scientific ideas could be widely shared through printing, accelerating processes of data collection, experimentation and debate. This was instrumental to the scientific revolution.
In summary, the printing press was a key driver of the dissemination of ideas during major social, religious and intellectual changes in the early modern period in Europe. It helped enable the spread of humanism, Reformation, rise of vernacular languages and acceleration of scientific progress.
Here is an extrapolation of the potential effects of generative AI based on the effects of the invention of the printing press:
Democratization of content creation: Generative AI tools may allow more people to easily generate all kinds of creative works like images, videos, writing, music etc. This could mirror how printing expanded authorship.
Accelerated spread of ideas: AI-generated content could propagate new concepts rapidly online, just as printing disseminated humanist texts and revolutionary writings more broadly.
Shift from scarcity to abundance: Generative AI may replace scarce, costly manual production with abundant, cheap automated creation, just as printing replaced hand-copied manuscripts. This could impact creative industries.
Empowerment of grassroots movements: Citizen-led causes may leverage AI tools to amplify messages through generated visuals/narratives online, paralleling how printing aided reformers like Luther.
Rise of AI-generated literature: Entire books, stories, poems could be algorithmically written, analogous to printed vernacular texts establishing new cultural forms.
Democratization of knowledge: Open-source generative models may make specialized expertise like science/medicine/law more accessible to all through synthesized content.
Accelerated scientific progress: AI models could generate hypotheses and analyze data at vast scales, freeing researchers to confirm or falsify ideas faster through collaborative online science, much as printing sped up the scientific process.
Changes to intellectual property: Widespread AI generation may challenge existing models of ownership over creative works as printing did for copying manuscripts.
Of course, there are also risks such as misuse, bias, and economic disruption to consider with generative AI that echo concerns raised historically over printing technologies.
Overall impacts will depend on how generative tools are developed and governed.
Text
resurrection of the author
As ChatGPT and the like are embedded into various software packages and “productivity suites,” they may be deployed as a kind of on-demand textuality, providing on the erstwhile blank page a plausible version of the expected thing suited to (and conditioned by) a particular situation that can be distilled into an efficacious prompt.
In other words, ready confirmation that there really is “nothing outside the text,” that there is always a ready-at-hand textual equivalent for whatever scenarios crop up in the course of ordinary business. AI models make the textuality we always already swim in more directly manipulable; they offer a means to inspect the code, so to speak, to render explicit the underlying banalities that characterize the ordinary understanding of a particular condition or situation or task. Rather than having to mystically draw from that ambient field of platitudes when we have some mundane writing task to perform, AI models can just disgorge them as required. (Won’t this mean that more tasks will pivot to video and demand visual evidence of human presence and performance?)
AI models aspire to be operable models of the discursive field of language that poststructuralist theorists used to argue we all exist within — that subjectivity emerges from language use, rather than language merely being something that a pre-constituted and centered subject puts to use. I think something similar might be claimed for our imminent use of generative AI as a means of expedited expression: it will seem like a tool that we use to express ourselves, but in practice it will be a means by which we articulate ourselves as subjects. If that becomes the case — if AI becomes an augmented way of wielding language, with all its biases and connections and associations condensed into an immediately accessible and seemingly navigable terrain — its capabilities will become the contours of our subjectivity.
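The claim that a language model is an "operable model" of ambient textuality can be made concrete with a deliberately crude sketch: a bigram model trained on a few invented office platitudes, which then disgorges a statistically expected continuation on demand. The corpus below is made up for illustration, and an LLM is vastly more sophisticated, but the basic gesture of conditional next-word generation is similar:

```python
import random
from collections import defaultdict

# Toy bigram "platitude generator": a crude stand-in for the way a
# language model regurgitates the statistically expected next word.
corpus = (
    "per my last email I hope this finds you well "
    "I hope this helps please do not hesitate to reach out "
    "please find attached the document per my last email"
).split()

# Map each word to the words observed to follow it.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

random.seed(0)
word = "I"
out = [word]
for _ in range(8):
    choices = model.get(word)
    if not choices:
        break  # no observed continuation; stop generating
    word = random.choice(choices)
    out.append(word)

print(" ".join(out))
```

The point of the toy is that the "text" it produces was never authored anywhere; it is recombined from the banalities already circulating in its training data, which is one way of cashing out the essay's claim about an ambient field of platitudes made directly manipulable.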
In a 1969 talk, “What Is an Author?”, Foucault examines the variable relation between a text and its writer, and the historical nature of the bias that overrates the significance of the “author function” (itself “a complex operation whose purpose is to construct the rational entity we call an author”) over other factors determining a text’s existence and implications. We need the idea of an author, Foucault suggests, to be able to construe a text as bearing “creativity” or “inspiration” or “profundity” or “unity”, or to allow us to preoccupy ourselves with matters of authentication (“Who really wrote this?”).
“We can easily imagine a culture where discourse would circulate without any need for an author,” Foucault claims. “Discourses, whatever their status, form, or value, and regardless of our manner of handling them, would unfold in a pervasive anonymity.” With AI threatening to marginalize or radically redefine “authorship” in ways far more direct and pervasive than the ones Foucault was confronting in 1969, this seems more possible than ever. Foucault’s demand for different sorts of questions seems accordingly even more pertinent:
suspicions arise concerning the absolute nature and creative role of the subject. But the subject should not be entirely abandoned. It should be reconsidered, not to restore the theme of an originating subject, but to seize its functions, its intervention in discourse, and its system of dependencies. We should suspend the typical questions: how does a free subject penetrate the density of things and endow them with meaning; how does it accomplish its design by animating the rules of discourse from within?
Rather, we should ask: under what conditions and through what forms can an entity like the subject appear in the order of discourse; what position does it occupy; what functions does it exhibit; and what rules does it follow in each type of discourse?
In the revised version of the talk, Foucault suggests that the idea of an “author” has become “a certain functional principle by which, in our culture, one limits, excludes, and chooses; in short by which one impedes the free circulation, the free manipulation, the free composition, decomposition, and recomposition of fiction ... the ideological figure by which one marks the manner in which we fear the proliferation of meaning.” Generative AI (though not “free”) is intended to exponentially expand the circulation, manipulation, composition, decomposition of texts. It will intensify our need for an “author” to impede that proliferation. Regardless of how texts are produced, their authors will likely become even more occulted figures. 
iam-the-egg-boi · 2 years
Note
In response to your Paul rewriting history ask:
I think Paul does engage in a little bit of revisionism (not necessarily lying, but just portraying things as generally more positive than they were). But the examples you mention are wrong or misleading I think!
The biographer who said Paul won’t tell you about the Beatles history is Philip Norman, author of the book Shout! (1981). While this is a very popular Beatles bio, it is also very well known that it is biased against Paul, dismissing his contributions in order to promote John Lennon. Here’s a quote from the book: “John Lennon was three quarters of the Beatles.” Norman was so criticized for his anti-Paul bias that he actually ended up writing a bio of Paul in 2016 partly to restore his credibility as a biographer, admitting his past mistakes. So I’d take what he said about Paul in 1987 with a grain of salt.
As for Blackbird, Paul talked about writing it in response to the civil rights movement in November 1968! In private too, so there’s an extra layer of credibility since he wasn’t just saying what he thought sounded nice for an interview. I think he has given some conflicting accounts in interviews actually, but there is a recording of him mentioning the civil rights origin to Donovan in the recording sessions for Mary Hopkin’s album “Post Card.” Go to the timestamp 3:54 on this video for the clip: https://youtu.be/VLUFLxKpDCo
For “In My Life,” Paul seems to genuinely think he wrote the melody for that one and was actually frustrated with John for forgetting his contribution to it (John said Paul only helped with the middle eight). He mentions his frustration about this in an “off the record” phone call with Hunter Davies in 1981, and he said it was a song that he and John disputed in the Playboy interview in 1984. There’s no way to know who is truly misremembering (I’m inclined to think Paul wrote less of it than he remembers), but I don’t think Paul’s lying about thinking he wrote the melody. And John did the same thing, claiming he wrote most of the lyrics to Eleanor Rigby when by all other accounts he contributed a few lines. I can’t think of any other examples of songs where they significantly disagree on the authorship.
Sorry for the long response! :)
No worries thank u pal!
Love when people know what they’re talking about I just chat shit!! Good job some people do their research unlik moi
Here u go Paul fans
I will continue to say Paul is a liar bcos it is funny to ME 🧍🏼
Text
Current Trends: Electrical AI Blogging Solutions
In the era of technological marvels and digital innovation, the emergence of the Electrical AI Blog Writer marks a pivotal milestone in the evolution of content creation. This groundbreaking advancement harnesses the power of artificial intelligence (AI) and electrical engineering principles to craft engaging and informative blog posts, revolutionizing the landscape of online communication.

The genesis of the Electrical AI Blog Writer stems from the convergence of AI algorithms and electrical engineering concepts. Through sophisticated machine learning techniques, coupled with the understanding of electrical systems, this innovative tool transcends conventional content generation methods. By amalgamating the prowess of AI with the intricacies of electrical engineering, the Electrical AI Blog Writer possesses the capability to comprehend complex topics, analyze data trends, and articulate insightful narratives with unparalleled precision and efficiency.

One of the most compelling aspects of the Electrical AI Blog Writer is its adaptability across diverse domains. Whether delving into the realms of technology, science, business, or culture, this versatile tool demonstrates remarkable versatility in generating content tailored to specific audiences and niches. Its ability to synthesize vast amounts of information and distill key insights into cohesive blog posts empowers businesses, educators, and enthusiasts alike to disseminate knowledge effectively in the digital sphere.

Furthermore, the Electrical AI Blog Writer embodies the spirit of innovation by continually evolving and learning from its interactions. Through iterative processes of data analysis and algorithm refinement, it adapts to the dynamic landscape of online discourse, ensuring relevance and accuracy in its output. This iterative learning process not only enhances the quality of content generated but also underscores the transformative potential of AI-driven technologies in shaping the future of communication.

Beyond its technical capabilities, the Electrical AI Blog Writer catalyzes a paradigm shift in the way we perceive creativity and authorship. While traditional notions of writing may emphasize human ingenuity and expression, the advent of AI-driven content generation challenges these preconceptions by demonstrating the capacity of machines to emulate and even surpass human cognitive functions. This raises profound questions about the nature of creativity, authorship, and the role of technology in shaping our cultural landscape.

However, amidst the excitement surrounding the Electrical AI Blog Writer, ethical considerations loom large. As AI assumes a more prominent role in content creation, concerns regarding algorithmic bias, data privacy, and intellectual property rights come to the forefront. Safeguarding against unintended consequences and ensuring transparency in the development and deployment of AI-powered tools becomes imperative to foster trust and accountability in the digital ecosystem.

In conclusion, the advent of the Electrical AI Blog Writer heralds a new era of content creation, underpinned by the fusion of AI and electrical engineering principles. Its ability to generate insightful, engaging, and contextually relevant blog posts underscores the transformative potential of technology in reshaping communication dynamics. As we navigate the complexities of this digital frontier, it is essential to embrace innovation responsibly, leveraging the power of AI to amplify human creativity and knowledge dissemination for the betterment of society.
aiwriterfilm11 · 1 month
Text
Improve Your Scriptwriting Efficiency with AI Writer for Film
In the ever-evolving landscape of filmmaking, technology continues to shape the creative process, introducing innovative tools that revolutionize storytelling. Among these tools stands the AI Writer for Film — a digital artisan’s assistant that augments the creative journey of filmmakers, offering a blend of efficiency, creativity, and ingenuity. In this essay, we delve into the profound impact of AI Writers on the world of film, exploring their capabilities, challenges, and the symbiotic relationship they foster between human imagination and artificial intelligence.

At its core, an AI Writer for film is a sophisticated algorithm trained on vast datasets of screenplays, film scripts, and storytelling principles. Leveraging natural language processing (NLP) and machine learning techniques, these AI systems analyze patterns, tropes, and narrative structures to generate compelling storylines, dialogues, and character arcs. By understanding the nuances of storytelling, AI Writers assist filmmakers in brainstorming ideas, refining plot points, and overcoming creative blocks.

One of the most remarkable features of AI Writers is their ability to adapt to diverse genres, styles, and audience preferences. Whether it’s crafting a gripping thriller, a heartwarming romance, or a mind-bending science fiction epic, AI Writers excel in tailoring narratives to meet the unique demands of each project. Moreover, they offer valuable insights into market trends and audience engagement metrics, enabling filmmakers to make data-driven decisions and maximize the commercial viability of their films.

However, the integration of AI Writers into the filmmaking process is not without its challenges. Critics often raise concerns about the authenticity and originality of AI-generated content, fearing a homogenization of storytelling and a dilution of human creativity. While AI can mimic established patterns and conventions, it may struggle to capture the ineffable essence of human experience — the subtleties of emotion, the complexities of interpersonal relationships, and the richness of cultural context. Thus, filmmakers must strike a delicate balance between harnessing AI’s capabilities and preserving the distinctiveness of their artistic vision.

Furthermore, ethical considerations loom large in the realm of AI-assisted creativity. Questions about intellectual property rights, authorship attribution, and algorithmic bias demand careful deliberation. As AI becomes increasingly proficient at generating content, the boundaries between human-authored and AI-generated works blur, prompting a reevaluation of traditional notions of authorship and creativity ownership. Filmmakers must navigate this ethical terrain with prudence, ensuring transparency, accountability, and respect for creative contributions, both human and machine.

Despite these challenges, the collaboration between human filmmakers and AI Writers holds immense promise for the future of cinema. By harnessing the complementary strengths of human intuition and machine intelligence, filmmakers can unlock new realms of storytelling possibilities, pushing the boundaries of imagination and innovation. AI Writers serve not as substitutes for human creativity, but as catalysts for inspiration, companions in the creative journey, and champions of storytelling diversity.

In conclusion, the emergence of AI Writers marks a transformative chapter in the evolution of filmmaking — an era defined by the fusion of artistry and artificial intelligence. As filmmakers embrace these technological marvels with caution and creativity, they pave the way for a cinematic landscape that is richer, more inclusive, and more vibrant than ever before. In the tapestry of storytelling, AI Writers emerge as indispensable collaborators, weaving threads of innovation into the fabric of cinematic imagination.
bittublog123 · 1 month
Text
Exploring the Intersection of Technology and Humanistic Studies: A New Frontier for Social Sciences.
In the contemporary landscape, the convergence of technology and humanistic studies has emerged as a compelling frontier, reshaping the paradigms of social sciences. Geeta University stands at the forefront of this transformative journey, spearheading interdisciplinary inquiry into the intricate relationship between technology and humanity. This fusion opens avenues for innovative research, enriching our understanding of societal dynamics, cultural phenomena, and individual experiences. By bridging the realms of technology and humanistic studies, Geeta University embarks on a quest to unravel the complexities of our digitally mediated world.
At the heart of this interdisciplinary exploration lies the recognition that technology is not a mere tool but a potent force shaping human behavior, interactions, and societal structures. Through a humanistic lens, scholars at Geeta University delve into the ethical, cultural, and philosophical implications of technological advancements. By interrogating the impact of algorithms, artificial intelligence, and digital platforms on human agency and identity, researchers illuminate the nuanced interplay between technology and society.
One prominent area of inquiry is the intersection of digital media and cultural studies. Geeta University scholars investigate how digital platforms shape cultural production, consumption, and representation. From analyzing the influence of social media on collective memory to exploring the democratization of storytelling through digital archives, researchers navigate the evolving landscapes of cultural expression in the digital age. By examining the tensions between globalization and cultural diversity in digital spaces, Geeta University fosters critical dialogues on cultural identity and representation.
Moreover, the integration of technology into the fabric of everyday life necessitates a reevaluation of fundamental concepts such as privacy, autonomy, and social justice. Geeta University pioneers research that examines the ethical dilemmas arising from ubiquitous surveillance, data commodification, and algorithmic bias. By foregrounding human values in technological design and implementation, scholars envision more inclusive and equitable digital futures. Through interdisciplinary collaborations, Geeta University fosters a holistic approach to ethical inquiry, transcending disciplinary boundaries to address the complex ethical challenges of our technologically mediated world.
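The kind of algorithmic-bias scrutiny described above can be made concrete with a small, hypothetical example. The sketch below computes the demographic parity gap, the difference in positive-decision rates between groups, over invented decision data; it is one of the simplest fairness checks in the literature, not any tool Geeta University actually uses.

```python
def positive_rate(decisions, groups, group):
    """Fraction of members of `group` that received a positive decision."""
    hits = [d for d, g in zip(decisions, groups) if g == group]
    return sum(hits) / len(hits)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

A gap near zero does not prove a system is fair (demographic parity is only one of several competing fairness criteria), but a large gap is exactly the kind of measurable signal that ethical inquiry into algorithms depends on.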
Furthermore, the digital age has ushered in new forms of narrative construction and interpretation, prompting scholars to explore the intersections of technology and narrative studies. Geeta University researchers investigate how digital storytelling platforms redefine narrative structures, audience engagement, and authorship. By examining the role of algorithms in shaping personalized narratives and immersive storytelling experiences, scholars unravel the evolving dynamics of narrative in the digital era. Through experimental storytelling practices and digital humanities methodologies, Geeta University pioneers innovative approaches to understanding the complex interplay between technology, narrative, and human experience.
In conclusion, the intersection of technology and humanistic studies represents a fertile ground for interdisciplinary inquiry, offering new perspectives on the societal, cultural, and ethical implications of technological advancements. Geeta University's commitment to bridging these disciplines facilitates a deeper understanding of the complexities of our digital age. By fostering collaborative research and innovative scholarship, Geeta University is poised to shape the future of social sciences and pave the way for a more nuanced understanding of technology's impact on humanity.
justpostsyeet · 1 year
Text
Continuing with my "winners write the history" and authorship bias thoughts
It's okay for Melian to enchant Thingol, but when Eöl does it with Aredhel it's wrong. The Noldor are always right and noble while outsiders are uncouth and primitive savages who rape and deceive people?
I have a full-blown analysis(?) of it and I'll expand upon this later.
bunnmckee94 · 2 months
Text
Artificial Intelligence in Healthcare: Recent Applications and Developments (SpringerLink)
Despite best intentions, such decisions sometimes lead to suboptimal care because of the complexity of patient care, the increasing duties of healthcare providers, or simply because of human error. The clinical decision-making process is often strictly based on standard guidelines and protocols that satisfy safety and accountability requirements. However, deviation from established protocols in complex care environments may be beneficial for the patient, allowing treatments to be adapted into a more personalized regimen. In such dynamic settings, ML methods can be valuable tools for optimizing patient care outcomes in a data-driven manner, especially in acute care settings.

The San Francisco–based company Enlitic develops deep learning medical tools to improve radiology diagnoses by analyzing medical data. In some cases, these tools can replace the need for tissue samples with "virtual biopsies," which would aid clinicians in determining the phenotypes and genetic properties of tumors.

The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on the ethics of AI in public and population health. The literature highlighted numerous common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs).

The authors declare that they have no potential conflicts of interest with respect to the research, authorship, and publication of this article. Data analysis was performed by LP, IL, JMN, PN, MN and PS and then discussed with all authors. JR and DT provided critical revision of the paper for important intellectual content. I think that automation via AI could be a safe approach, and it would be excellent for primary care services.
This data is further stored in the cloud, and constant monitoring is done to avoid complications and readmissions to hospitals. In addition, AI also helps the healthcare system in diagnosis and treatment applications, patient engagement and adherence, and administrative functions [6]. AI not only simplifies the work of doctors, nurses, and other healthcare staff but also saves an ample amount of time. Thus, the adoption of digital solutions for the prevention, diagnosis, and treatment of various illnesses is the sensible route for India to pursue the goal of providing health for all.

The U.S. health care system is under pressure from an aging population; rising disease prevalence, including from the current pandemic; and rising costs. New technologies, such as AI, could augment patient care in health care facilities, including outpatient and inpatient care, emergency services, and preventative care. However, the use of AI-enabled tools in health care raises a wide range of ethical, legal, economic, and social concerns. Because of them, we are unlikely to see substantial change in healthcare employment as a result of AI over the next 20 years or so.

However, despite these efforts from many countries, no country has been able to systematically resolve the privacy issues concerning health care data. The company's centralized, cloud-based platform powers biopharma companies, life science organizations, healthcare providers and academic medical centers, helping them identify, curate and prepare medical imaging data to accelerate time to insight.

The findings suggest that incorporating socioeconomic factors into predictive models can improve their accuracy and effectiveness. These models have the potential to find practical application in clinical settings for identifying individuals at risk of tooth loss, enabling health care professionals to prioritize preventive interventions.
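The tooth-loss study's actual model is not reproduced in the post, but the underlying idea, a risk score trained on socioeconomic and clinical features, can be sketched from scratch. The feature rows below (smoker flag, low-income flag, scaled age) and their labels are entirely invented for illustration; a real study would use validated clinical data and a properly evaluated model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (weights + bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Probability-like risk score for one individual's feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Invented rows: [smoker, low_income, age/100] -> experienced tooth loss?
X = [[1, 1, 0.6], [1, 0, 0.5], [0, 1, 0.4], [0, 0, 0.3],
     [1, 1, 0.7], [0, 0, 0.2], [0, 1, 0.5], [1, 0, 0.6]]
y = [1, 1, 1, 0, 1, 0, 0, 1]
w, b = train_logistic(X, y)
print(predict_risk(w, b, [1, 1, 0.65]))  # score for a higher-risk profile
```

Once trained, the same `predict_risk` call can rank patients so that preventive interventions are offered to the highest-risk individuals first, which is the clinical use the post describes.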
Numerous research investigations focusing on cervical cancer and cervical intraepithelial neoplasia (CIN) have documented the application of AI. The main areas where AI has been employed include the analysis of colposcopy, MR imaging (MRI), CT scans, cytology, and data related to human papillomavirus (HPV) [90].

Part of this hesitation is the need for any technology to be tested before it can be trusted. But there is also the romanticized notion of the diagnostician whose mind contains more than any textbook. Powell joined NVIDIA in 2008 with responsibility for establishing NVIDIA GPUs as the accelerator platform for medical imaging devices. She spent her early career in engineering and product management of diagnostic display systems at Planar Systems. The integration of artificial intelligence with the healthcare industry is made possible by consolidation and sourcing in the cloud.

Although more and more regulatory guidelines are available, such as those developed by the World Health Organization [54] and the European Union [55], the use of AI in health care remains debatable because of the challenges in ensuring data privacy and proper data use [56]. This is especially true when data collection is conducted via third-party apps, such as Facebook Messenger (Meta Platforms), whose privacy policies are governed by technology companies and not health care institutions [24]. Moreover, although there are privacy and security precautionary measures, increasing reports of data leaks and vulnerabilities in electronic medical record databases erode population trust. Future security and transparency measures may consider the use of blockchain technology, and privacy laws should be properly delineated and transparent [57].
In qualitative research, the concepts of credibility, dependability, and transferability are used to describe different aspects of trustworthiness [72]. In order to create a better prediction, high-quality, continuous data from multiple domains are required. Also, advancements in health data processing, biosensors, genomics, and proteomics will help provide a comprehensive set of data that will enable perioperative intelligence (19). Incorporating intraoperative data for early detection of complications or clinical aberrations could also prevent inflammatory reactions that exacerbate the injury, or high-risk interventions that may result in iatrogenic injuries.

By merging current best practices for ethical inclusivity, software development, implementation science, and human-computer interaction, the AI community will have the opportunity to create an integrated best-practice framework for implementation and maintenance [116]. Additionally, collaboration between multiple health care settings is required to share data and ensure its quality, as well as to verify analyzed outcomes, which will be critical to the success of AI in clinical practice.

Medical schools are encouraged to incorporate AI-related subjects into their medical curricula. A study conducted among radiology residents showed that 86% of students agreed that AI would change and improve their practice, and as many as 71% felt that AI should be taught at medical schools for better understanding and application [118]. This integration ensures that future healthcare professionals receive foundational knowledge about AI and its applications from the early stages of their training.
generatorsblog · 2 months
Text
Seamless Video Magic: AI Galician Video Generator Online for Free - Simplified
AI Galician Video Generator
In the bustling realm of digital content creation, innovation knows no bounds. Among the latest breakthroughs stands the AI Galician Video Generator, a pioneering tool that marries artificial intelligence with the rich linguistic heritage of Galician culture. This revolutionary system empowers creators to generate compelling video content seamlessly, harnessing the power of AI to construct narratives, elucidate concepts, and entertain audiences in the Galician language.
At its core, the AI Galician Video Generator is fueled by cutting-edge natural language processing algorithms, honed through meticulous training on vast repositories of Galician text. Through this process, the AI not only comprehends the nuances of Galician syntax and semantics but also develops a deep understanding of Galician culture, history, and societal norms.
The system operates through a user-friendly interface, where creators input their desired themes, topics, or keywords. Drawing upon its extensive linguistic database, the AI Galician Video Generator swiftly crafts scripts, dialogues, and storyboards tailored to the user's specifications. Whether the goal is to produce educational content, promotional materials, or captivating narratives, the possibilities are virtually limitless.
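The generator's internals are not public, and a production system would learn structures from a large Galician corpus rather than hard-code templates, but the keyword-to-script workflow described above can be illustrated with a deliberately minimal template expander (the themes, template lines, and topics below are all invented, and in English for readability):

```python
# Tiny illustrative template bank; a real system would learn these
# structures from a corpus instead of hard-coding them.
TEMPLATES = {
    "recipe": ["Welcome! Today we prepare {topic}.",
               "First, gather the ingredients for {topic}.",
               "Finally, serve your {topic} and enjoy."],
    "folklore": ["Long ago in Galicia, the tale of {topic} began.",
                 "Elders still recount how {topic} shaped the village.",
                 "And so the legend of {topic} endures."],
}

def generate_script(theme, topic):
    """Expand the chosen theme's scene templates with the user's keyword."""
    scenes = TEMPLATES[theme]
    return "\n".join(line.format(topic=topic) for line in scenes)

print(generate_script("recipe", "empanada"))
```

The point of the sketch is the interface, not the quality of the output: the user supplies a theme and a keyword, and the system returns a structured, scene-by-scene script ready for narration or storyboarding.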
One of the most remarkable features of the AI Galician Video Generator is its ability to adapt to diverse content genres. From instructional videos elucidating traditional Galician recipes to immersive documentaries exploring the region's folklore and traditions, the AI seamlessly navigates various thematic landscapes with finesse. Furthermore, it can generate content suitable for different platforms, including social media, streaming services, and educational portals, catering to a wide spectrum of audience preferences.
In addition to its versatility, the AI Galician Video Generator boasts unparalleled efficiency. By automating the content creation process, it significantly reduces the time and resources required to produce high-quality videos. Creators no longer grapple with writer's block or laborious scriptwriting; instead, they can focus their energies on refining the visual aspects of their projects, confident in the AI's ability to furnish them with engaging narratives.
Moreover, the AI Galician Video Generator fosters inclusivity by democratizing content creation. Historically, linguistic barriers have limited the accessibility of digital content to speakers of dominant languages. However, by offering a sophisticated tool for Galician speakers, this innovation empowers creators from Galicia and beyond to amplify their voices and share their stories with global audiences.
Furthermore, the AI Galician Video Generator serves as a catalyst for cultural preservation and revitalization. In an increasingly interconnected world, indigenous languages and cultural heritage face the looming threat of marginalization. By harnessing AI technology to produce Galician-language content, this platform contributes to the preservation and promotion of Galician culture, ensuring its enduring legacy in the digital landscape.
As with any technological innovation, ethical considerations abound. While the AI Galician Video Generator streamlines content creation, it also raises questions about authenticity and authorship. As creators leverage AI-generated scripts and storylines, they must remain vigilant in maintaining transparency and acknowledging the contributions of the underlying AI system.
Furthermore, there is a pressing need to mitigate the risk of algorithmic bias, ensuring that the AI Galician Video Generator reflects the diversity and complexity of Galician society accurately. By incorporating diverse perspectives and voices into its training data and algorithmic decision-making processes, developers can mitigate the propagation of stereotypes or misrepresentations in AI-generated content.
Looking ahead, the AI Galician Video Generator holds immense potential for continued innovation and growth. As AI technologies evolve and linguistic datasets expand, the platform will undoubtedly become more adept at capturing the intricacies of Galician language and culture. Moreover, collaborations between AI researchers, linguists, and content creators can foster interdisciplinary dialogue and drive further advancements in the field of AI-driven content generation.
In conclusion, the AI Galician Video Generator represents a paradigm shift in content creation, harnessing AI technology to amplify Galician voices and narratives on the global stage. By combining linguistic expertise with computational prowess, this groundbreaking platform opens new avenues for creativity, expression, and cultural exchange. As creators embrace this transformative tool, they embark on a journey to redefine the boundaries of storytelling and shape the future of digital media in the Galician language.
influencermagazineuk · 4 months
Text
AI-Driven SCM Transformation: Exploration of Supercharged Logistics and Cutting-edge Innovations with Industry Insights From Ketan Rathor
For decades, supply chain management (SCM) has operated in the shadows, ensuring the smooth flow of goods from creation to consumption. However, in today's hyper-connected, data-driven world, the traditional supply chain is undergoing a metamorphosis, transforming into a dynamic, intelligent organism steered by the twin superpowers of Artificial Intelligence (AI) and Machine Learning (ML). These cutting-edge technologies are injecting agility, resilience, and unprecedented optimization into the once-stodgy realm of logistics.

Accurately predicting demand has long been the Achilles' heel of SCM. Enter AI-powered forecasting models that analyze historical data, market trends, social media buzz, and even weather patterns to paint a hyper-realistic picture of future needs. This granular precision enables businesses to optimize inventory levels, avoid costly stockouts, and anticipate sudden demand surges. Imagine predicting the next athleisure craze based on Instagram trending hashtags or preparing for a winter sports equipment boom due to an unseasonal snowfall forecast – that's the power of AI-driven demand forecasting.

Warehouses, once static storage hubs, are transforming into bustling hives of intelligent activity thanks to AI-powered robots and automated systems. These robots, guided by ML algorithms, can navigate complex layouts, optimize picking routes, and pack orders with astonishing speed and accuracy. Picture a scenario where an order lands, analyzed by AI, and then seamlessly navigated through the warehouse by robotic arms, picked, packed, and dispatched – all without human intervention. This not only reduces labor costs but also minimizes errors and increases order fulfillment speed, leading to happier customers and improved bottom lines.

Equipment breakdowns have the potential to cripple supply chains. Predictive maintenance, powered by AI and ML, makes it possible to predict and prevent breakdowns before they happen. Sensors embedded in equipment gather real-time data on performance, vibrations, and temperature, feeding it into ML algorithms that identify anomalies and predict potential failures. This allows for proactive maintenance, preventing costly downtime and ensuring a smooth production flow. Imagine a scenario where an AI system alerts technicians to an impending machinery malfunction, allowing them to fix it before it disrupts production – that's the power of predictive maintenance.

Coordinating diverse transportation modes across continents used to be a logistical symphony that could easily go out of tune. AI and ML are now harmonizing this complex dance. Algorithms can analyze real-time traffic conditions, weather patterns, and carrier performance to optimize routes, identify the most efficient modes of transport, and even negotiate the best rates. This orchestration of logistics leads to faster delivery times, reduced transportation costs, and an improved carbon footprint – a win-win for businesses, customers, and the environment.

In the context of this transformative landscape, special thanks are extended to Ketan Rathor for sharing valuable insights. With an impressive 23 years of experience in Computer Science and IT Leadership, Ketan's diverse skill set, global impact, and commitment to innovation are evident. His expertise, prolific authorship, and multiple patents solidify Ketan Rathor as a distinguished leader shaping the future of AI-driven SCM.

Ketan shares, “While AI and ML are revolutionizing SCM, ethical considerations cannot be ignored. Concerns about bias in algorithms, data privacy, and job displacement need to be addressed responsibly. Ensuring transparency in how algorithms work, using diverse datasets to train them, and upskilling existing workforces are crucial steps toward harnessing the power of AI for good. The integration of AI and ML is not just a trend; it's a tectonic shift reshaping the entire landscape of SCM.
Companies that embrace these technologies will gain a competitive edge, while those clinging to outdated methods risk falling behind. The future belongs to supercharged supply chains – agile and intelligent organisms navigating the complexities of the global market with unparalleled efficiency and resilience. And the key to unlocking this future lies in harnessing the transformative power of AI and Machine Learning.

Also, Industry 4.0 software tools play a pivotal role in reshaping and enhancing sustainable supply chain management practices in emerging markets. As these markets undergo rapid development, leveraging advanced technologies becomes crucial for addressing the challenges associated with logistics, production, and distribution. Industry 4.0 tools, encompassing technologies like the Internet of Things (IoT), Artificial Intelligence (AI), and data analytics, empower businesses to optimize resource utilization, improve operational efficiency, and reduce environmental impact. Through real-time monitoring, predictive analytics, and smart decision-making capabilities, these tools enable companies to create agile and eco-friendly supply chains, ensuring a harmonious balance between economic growth and environmental sustainability in the dynamic landscape of emerging markets.

In the pharmaceutical sector, our research introduces an innovative Archimedes Optimization with Enhanced Deep Learning based Recommendation System (AOAEDL-RS), detailed in a previous study. This technique applies sentiment analysis to customer reviews, employing preprocessing, classification with a context-based BiLSTM-CNN (CBLSTM-CNN) model, and optimal hyperparameter tuning through Archimedes Optimization. Simultaneously, another study explores broader supply chain intricacies, emphasizing the importance of efficient practices and the integration of tree structures, notably decision trees, for managing e-commerce shipment content.

As the SCM landscape undergoes transformative shifts, the seamless integration of AI and ML technologies ensures a future where supercharged supply chains navigate global complexities with unprecedented efficiency and resilience. The key to unlocking this future lies in harnessing the transformative power of AI and Machine Learning.”
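The predictive-maintenance workflow described above, sensors streaming readings into software that flags anomalies before failure, can be sketched with the simplest possible detector: a rolling z-score over a window of recent readings. Real systems use far richer models; the vibration trace, window size, and threshold below are invented for the example.

```python
import statistics

def rolling_zscore_alerts(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates sharply from the recent window."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero
        z = (readings[i] - mean) / stdev
        if abs(z) > threshold:
            alerts.append(i)
    return alerts

# Hypothetical vibration sensor trace: steady operation, then a sharp spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 4.8, 1.0]
print(rolling_zscore_alerts(vibration))
```

Each alert index would trigger the kind of technician notification the article imagines; the design trade-off is the threshold, which balances missed failures against false alarms.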
usefulfictions · 5 months
Text
How can Digital Humanities make our assumptions about the world explicit? [Week 4/Simulations]
Writing from:
October 2023
A presentation from a classmate about simulations in digital humanities, featuring the Oregon Trail 
An article by Shawn Graham, “Behaviour Space: Simulating Roman Social Life and Civil Violence.” (2009)
A presentation from Deborah Poff, “Misconduct and Publication Ethics: From Ordinary Forms of Discrimination to the Ethically Opaque World of AI.” (2023)
Weird: A reference to Joshua Epstein that Graham uses in his article struck me as particularly relevant to my own interests and concerns with the digital. As Graham writes, “Epstein argues [...] the principal strength of agent-based modelling [is that] it requires modellers to make explicit their assumptions about how the world operates.” In a recent colloquium hosted by the philosophy department, the speaker, Deborah Poff, raised concerns about ChatGPT in the context of academic publishing and editing. Of particular concern for her was the inability of ChatGPT and similar technology to satisfy the requirements of ‘authorship’ even when it might very well be the writer of an essay or book. The definition that Poff used to contextualize what authorship meant insisted on the ability of someone to maintain responsibility for the work they are producing, both in the sense of being praised and being blamed for ideas that might be contained within. While someone can certainly say that ChatGPT is responsible for an essay that contains bias or some form of social injustice, it is harder to claim that ChatGPT holds itself responsible for what it produces. 
Wonderful: Perhaps not wonderful this week in the positive sense, but wonderful in the curious sense: during Poff’s presentation, though I understood the seeming urgency of ChatGPT as a problem for academic publishing, most of my thoughts were oriented towards the reality that ChatGPT introduces very few new issues to academic publishing. There have long been ghostwriters and publishing mills made specifically for people to take the writing of others and make it their own. But I think Epstein’s idea, that ABM forces modellers to make their assumptions explicit, speaks to the main issue underlying many of the complaints I have seen about any technological advancement that threatens to change the way academia works: namely, responsibility for the biases that writing contains. Though countless authors claim responsibility for their ideas yet do nothing to confront the biases or injustices their writing represents, it is still possible to locate their sources and parse out the development of ideas that reflect the implicit assumptions they may have made. With something like ChatGPT, its nature as a black box means that though we might take issue with the result it produces, it can be much harder to find the implicit assumptions it makes or trace where those assumptions emerge from.
I think this speaks to the power of things like ABM and the necessity of digital humanities to try and focus on tools that require the creator or user to interact with them in ways that necessitate that they make their assumptions explicit, both to encourage the individual to confront what may be their own biases, as well as make clear to other people who encounter their work what considerations they perhaps ought to make given what they know about the assumptions that have been made.
Worrying: Despite the ability of the digital to allow us to make our implicit assumptions more explicit in the case of ABMs, I do still worry about the way that many people tend to treat digital products and mechanisms as ones free of bias and thus more reliable than human work. This is not at all to say that technology is in some way inferior, only that it is always inextricably linked to the humans who use, create, and challenge it - inclusive of the biases and assumptions those humans have. To view the digital/technology as more objective than humans seems to lead to (and has certainly led) to a lack of responsibility for or even an examination of how different digital tools and technology reify existing biases, particularly ones that confirm existing social injustice.