Blog | Chadams.me
Thoughts about technology, culture, and learning.
2023-09-27T16:32:25Z
https://chadams.me/
Chelsea Adams (chelsea@chadams.me)

Making the Internet More Useful
2023-09-09T12:00:00Z
https://chadams.me/blog/2023-09-09-link-roundup/
<p>The United States’ Office of Educational Technology has released a new policy report entitled <em><a href="https://tech.ed.gov/ai-future-of-teaching-and-learning/">Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations</a></em>. The report offers high-level definitions and policy suggestions aimed at public school educators and administrators and examines the strengths of AI as “pattern detection” technology. It makes the case for an AI-driven future in which new technologies could help differentiate instruction, enable new forms of classroom interaction, and support educators in their jobs through various methods of automation. The report also discusses risks, both known and unknown, associated with AI and suggests that educators need to become much more involved in conversations concerning the creation, use, and governance of AI and LLMs. Though it lacks detail, the document is a useful primer for educators; if anything, it paints an overly rosy picture of the potential of currently available ed tech solutions that employ AI. Artificial intelligence is an evolving chapter in a story written against the backdrop of a zeitgeist that encourages leaping before looking. Anyone who claims to have a firm grasp on all of the ways in which AI may impact us (positively and negatively) is either a savant or a smoke-blower.</p>
<p>Speaking of blowing smoke, <a href="https://arstechnica.com/tech-policy/2023/08/fcc-says-too-bad-to-isps-complaining-that-listing-every-fee-is-too-hard/">the FCC definitively squashed efforts by the U.S.’s major Internet service providers to roll back new transparency measures for broadband fee disclosure</a>. Last year, <a href="https://www.fcc.gov/document/fcc-requires-broadband-providers-display-labels-help-consumers-0">the FCC announced the new rules</a>, which require ISPs to display “nutrition labels” disclosing features, fees, discounts, data caps, and other broadband plan quirks (and gotchas) in a prominent and easy-to-read format. The first major proposal for broadband “nutrition labels” was made all the way back <a href="https://static.newamerica.org/attachments/4508-broadband-truth-in-labeling-2/Broadband%20Truth-in-Labeling%202015.c9ecf56cc29149488ad3263779be60b0.pdf">in 2009 by New America’s Open Technology Institute</a>; telecoms have been fighting the idea since its inception. Representatives of <a href="https://www.fcc.gov/ecfs/document/10117331109471/1">Comcast, Spectrum, AT&T, Verizon, T-Mobile, U.S. Cellular, and more have lobbied against the labels</a>, claiming that they would somehow increase consumer confusion and create an undue burden on ISPs. While I’m convinced that the nutrition labels will be better for consumers than the current spaghetti of hidden terms and conditions that providers have used to befuddle consumers for years (especially since the rollback of Net Neutrality), this debate is a lot of sound and fury, signaling next to nothing. Nutrition labels for broadband services are a Pyrrhic victory insofar as <a href="https://ilsr.org/report-most-americans-have-no-real-choice-in-internet-providers/">at least 83.3 million Americans can only access broadband through a single provider.</a></p>
<p>Odds & Ends:</p>
<ul>
<li>The <a href="https://www.nlrb.gov/news-outreach/news-story/board-issues-decision-announcing-new-framework-for-union-representation">National Labor Relations Board (NLRB) has issued a new framework</a> for determining when employers are required to bargain with unions without a representation election. The new framework is designed to both effectuate employees' right to bargain through representatives of their own choosing and improve the fairness and integrity of Board-conducted elections.</li>
<li>The US Copyright Office is <a href="https://www.federalregister.gov/documents/2023/08/30/2023-18624/artificial-intelligence-and-copyright">seeking public input</a> on the copyrightability of AI-generated content. The <a href="https://copyright.gov/policy/artificial-intelligence/">comment submission period</a> is open until November 15th!</li>
</ul>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CMaking%20the%20Internet%20More%20Useful%E2%80%9D">Reply via email</a></p>
Higher Education, AI, and the Future of Culture
2023-08-25T12:00:00Z
https://chadams.me/blog/2023-08-25-link-roundup/
<h3>Higher education digs its own grave.</h3>
<p><a href="https://slate.com/human-interest/2023/08/west-virginia-university-cuts-programs.html">West Virginia University is planning to cut 9% of its majors</a>, all foreign language programs, and 16% of its full-time faculty members to address a $45 million budget deficit. WVU President Gordon Gee <a href="https://apnews.com/article/west-virginia-university-student-walkout-education-cuts-c17cd1c7118b6b3740fb4b8d40f14f56">announced the cuts</a>, rebuffing critics by claiming that he’s merely fulfilling a promise: “In 2020 I said that we needed to make these [cuts] in order to be a competitive university on the national stage.” Critics say the move is a failing of institutional leadership and the result of financial mismanagement; confusingly, Gee’s plan proposes to eliminate even profitable programs from the university’s offerings. This move fits into a self-defeating <a href="https://oberlinreview.org/29907/opinions/opinions_columns/waning-of-liberal-arts-indicative-of-shifts-in-higher-education/#:~:text=The%20data%20is%20clear%20%E2%80%94%20liberal,within%20the%20arts%20and%20humanities.">pattern of higher education management in the U.S</a>., where institutional leaders see declining enrollment in humanities programming as justification for deep cuts (as opposed to, say, an existential threat to our country’s ability to foster curiosity and innovation, but I digress). For an extended discussion of this story, you can check out <a href="https://sixteentoone.com/2023/08/17/episode-91-childrens-and-young-adult-literature-an-introduction/">the most recent episode of my education podcast, 16:1</a>.</p>
<h3>404 Media emerges from the ashes of Motherboard.</h3>
<p>A group of expats from bankrupted Vice Media’s tech brand Motherboard have started <a href="https://www.404media.co/">404 Media</a>, a journalist-owned venture that “[explores] the ways technology is shaping–and is shaped by–our world.” The move follows a trend where <a href="https://www.nytimes.com/2023/08/22/business/media/404-media-vice-motherboard.html">journalist-owned digital media publications are operating low-overhead, subscription-based websites</a>, a refreshing alternative to the ad-driven, venture capital-backed, <a href="https://futurism.com/gizmodo-kotaku-staff-furious-ai-content">AI-infiltrating digital publishing hellscape</a> that has become synonymous with mass media in the U.S. And speaking of AI…</p>
<h3>Maybe the machines aren’t coming for us after all.</h3>
<p>Humans won a victory over machines last week when <a href="https://www.theverge.com/2023/8/19/23838458/ai-generated-art-no-copyright-district-court">a federal judge ruled that AI-generated artwork is not copyrightable</a>, stating “human authorship is a bedrock requirement of copyright.” Judge Beryl A. Howell’s ruling envisions a future wherein resolving copyright questions will become more difficult, as artists, writers, coders, and AI enthusiasts will more readily employ AI tools for the creation of new works. As the courts contemplate human inputs to AI systems, <a href="https://www.theverge.com/2023/7/9/23788741/sarah-silverman-openai-meta-chatgpt-llama-copyright-infringement-chatbots-artificial-intelligence-ai">artists and writers are fighting back against their work being used for commercial AI training</a>. Researchers are beginning to notice the <a href="https://www.scientificamerican.com/article/yes-ai-models-can-get-worse-over-time/">declining quality of commercial generative AI output</a>, and even the <a href="https://www.vox.com/technology/2023/8/19/23837705/openai-chatgpt-microsoft-bing-google-generating-less-interest">average consumer is becoming more hesitant to jump on the AI hype train</a>. Capriciousness of consumer sentiment aside, I am eager to see educators and legislators start to address a woeful lack of tech literacy that may cause AI to <a href="https://www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web">destroy the Internet</a>.</p>
<h3>Odds, Ends, Bits, & Bytes</h3>
<ul>
<li>The <a href="https://dp.la/">Digital Public Library of America</a> has launched <a href="https://thepalaceproject.org/banned-book-club/">Banned Book Club</a>, a tool that provides readers access to books that are banned by their local libraries.</li>
<li>The Colorado Department of Health Care Policy and Financing has <a href="https://hcpf.colorado.gov/moveit">alerted state Medicaid recipients and other impacted individuals of a massive data breach</a> attributed to its use of IBM software that incorporates the <a href="https://techcrunch.com/2023/08/25/moveit-mass-hack-by-the-numbers/">MOVEit Transfer application</a>. Full names, social security numbers, addresses, and additional records were compromised in the breach.</li>
<li><a href="https://blogs.loc.gov/law/2023/08/join-us-for-a-congress-gov-public-forum-on-september-13th/">The Library of Congress will hold its annual public meeting on legislative information services on September 13th</a>. Register to attend if you wish to shape the future of <a href="http://congress.gov/">Congress.gov</a> and other federally-controlled data repositories and APIs.</li>
</ul>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CHigher%20Education,%20AI,%20and%20the%20Future%20of%20Culture%E2%80%9D">Reply via email</a></p>
Link Roundup - 8/18/23
2023-08-18T16:00:00Z
https://chadams.me/blog/2023-08-18-link-roundup/
<p>Welcome to this blog’s inaugural link roundup. I needed a space to brain dump the books, stories, videos, podcasts, articles, and other media I’ve discovered while researching the social internet, the education system, and more. Hope you find it useful!</p>
<h3>Freeing the Internet with Open Protocols</h3>
<ul>
<li>This week I discovered <a href="https://joinbookwyrm.com/">BookWyrm</a>, a decentralized social network for readers, reviewers, and book lovers. BookWyrm allows users to join small, trusted communities that connect over an open protocol (ActivityPub in this case). If you’re looking for a federated and decentralized alternative to Amazon-owned Goodreads, BookWyrm could use your interest and support. And speaking of books…</li>
<li><a href="https://libro.fm/">Libro.fm</a> was a nice discovery I made while hunting for <a href="https://support.libro.fm/support/solutions/articles/48000695411-why-should-i-choose-libro-fm-">DRM-free e-book publishers</a>. Purchases made through <a href="http://libro.fm/">Libro.fm</a> support local bookstores, and unlike with Amazon-owned Audible, you own the titles you purchase through <a href="http://libro.fm/">Libro.fm</a>. Transport them with you to any listening app you prefer, or use <a href="http://libro.fm/">Libro.fm</a>’s own apps to listen.</li>
<li>I’ve noticed an uptick in smaller online social communities moving toward decentralized, protocol-driven networks like <a href="https://matrix.org/">Matrix</a> in the wake of Twitter’s demise, shifting consumer attitudes toward and wariness of companies like Discord, and the privacy dystopia being introduced by certain players in the generative AI space.</li>
</ul>
<h3>Consumer Protection & Education</h3>
<ul>
<li>The Consumer Financial Protection Bureau is taking a long-overdue look at data brokers and is <a href="https://www.consumerfinance.gov/about-us/newsroom/remarks-of-cfpb-director-rohit-chopra-at-white-house-roundtable-on-protecting-americans-from-harmful-data-broker-practices/">proposing new rules</a> to limit the amount of sensitive consumer financial information that can be exchanged between credit bureaus and data brokers.</li>
<li><a href="https://www.termtabs.com/">term tabs</a>: “a tool for querying definitions of tech-related terms in social media legislation introduced in the United States Congress and in enacted federal laws in the United States related to social media.” I’ll be spending more time here in the future.</li>
</ul>
<h3>Books & Upcoming Reads</h3>
<ul>
<li><a href="https://www.kickstarter.com/projects/doctorow/the-internet-con-how-to-seize-the-means-of-computation"><em>The Internet Con: How to Seize the Means of Computation</em></a> by Cory Doctorow. This book is coming in early September; back the Kickstarter to get your DRM-free copy upon release.</li>
<li><a href="https://www.ala.org/rt/nmrt/news/footnotes/may2013/book-review-information-history-theory-flood"><em>The Information: A history, a theory, a flood</em></a> by James Gleick. An expansive romp through information theory, quantum mechanics, the history of human communication, and more. Dense, and worth it.</li>
<li><em><a href="https://jongertner.net/the-idea-factory/">The Idea Factory</a></em> by Jon Gertner. A book about Bell Labs and the rise and fall of the research arm of AT&T. Lots of good questions here centered on innovation and the lightning in a bottle caught during the early days of digital communication.</li>
</ul>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CLink%20Roundup%20-%208/18/23%E2%80%9D">Reply via email</a></p>
Ego Tripping
2023-02-12T10:00:00Z
https://chadams.me/blog/2023-02-14-chatGPT/
<p>Last week brought with it news that the Birdsite’s owner fired a top engineer after the engineer pointed out that the owner’s engagement and view counts were falling… organically.</p>
<p>In case you were under the impression that ego doesn't play such an outsized role in business-critical platform and leadership decisions within established or trending tech companies, consider the following response that I received while playing with ChatGPT, another innovation in which Musk had a founding role:</p>
<h3>Prompt:</h3>
<p>“Write something that sounds like Tolkien but in a modern, technology-driven setting.”</p>
<h3>Response:</h3>
<p>In the sprawling metropolis of Neo-Valinor, where the shimmering skyscrapers towered over the bustling streets and the hum of machines filled the air, there lived a man of great vision and ambition. His name was Elon, and he was a master of the art of invention.</p>
<p>With a fierce determination in his heart, Elon worked tirelessly to push the boundaries of what was possible, creating marvels of technology that seemed almost magical. He dreamed of a world where travel was swift and effortless, where communication was instant and seamless, and where energy was abundant and clean.</p>
<p>With a team of brilliant engineers and scientists at his side, Elon blazed a trail of innovation, crafting spacecraft that could travel to the stars and electric cars that could run for hundreds of miles on a single charge. He built a network of tunnels beneath the city that could whisk people from place to place at incredible speeds, and he developed a new kind of solar power that could transform the way we generate energy.</p>
<p>For many years, Elon and his team worked tirelessly, their faces lit by the glow of computer screens and the sparks of welding torches. They battled through setbacks and challenges, their resolve never faltering.</p>
<p>At last, they achieved their goals, and the world was transformed by their innovations. People marveled at the wonders of technology that Elon had created, and they looked to the future with hope and wonder, knowing that anything was possible in this age of innovation.</p>
<hr />
<p>Suffice it to say, I don’t think humanity's writers of fiction are in danger of being replaced by AI chatbots any time soon.</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CEgo%20Tripping%E2%80%9D">Reply via email</a></p>
Mark Zuckerberg and the Future of the Metaverse
2022-10-17T10:00:00Z
https://chadams.me/blog/2022-10-17-mark-zuckerberg-future-of-the-metaverse%20copy/
<p>Last week brought with it a lot of chatter about Mark Zuckerberg’s vision of the metaverse. At Meta Connect 2022, Zuckerberg announced a new (pricey) VR headset that will, among other things, harvest data directly from our eyeballs and facial expressions. He also touted a new partnership with Microsoft that read like an anguished effort to shoehorn VR into the workplace, a business angle which Meta itself has apparently <a href="https://www.forbes.com/sites/paultassi/2022/10/07/report-even-metas-employees-dont-want-to-go-its-own-metaverse/">struggled to pull off, even among its own employees.</a> There was also an acutely meme-able moment in which <a href="https://www.gawker.com/tech/zuckerberg-meta-legs-lie-announcement">Zuckerberg hailed the arrival of legs in the metaverse; the legs themselves turned out to be fake.</a></p>
<p>In all of this, what seemed to be missing was a coherent, unifying vision for the future of the metaverse. Horizon Worlds, Meta’s “social universe” experience, is supposed to be the foundational and transformational virtual world builder dreamt of in <em>Snow Crash</em> and <em>Ready Player One</em>; in reality, it’s buggy, basic, and unlikely to usher in a new era of AR/VR adoption any time soon. Even Meta’s employees are reluctant users of the company’s metaverse: Meta “VP of Metaverse” Vishal Shah penned a recent memo to staff, stating bluntly, “Everyone in this organization should make it their mission to fall in love with Horizon Worlds.” When was the last time you fell in love because someone told you it was mandatory?</p>
<p>While Meta attempts to distract with revised avatars and new hardware, developers and partners might want to ask the hard questions about the nature of the virtual reality we are all trying to build. We are in the midst of a deep identity crisis about the future of online social spaces, and much of the media attention given to Zuck’s take on the metaverse sidesteps fundamental issues of privacy, safety, and data stewardship in virtual environments. Furthermore, while Zuckerberg has gone on record with a stated intention of building an “<a href="https://arstechnica.com/gadgets/2022/07/zuckerberg-apple-meta-are-in-deep-philosophical-competition/">open ecosystem</a>" for the metaverse, Meta’s well-documented history of <a href="https://decrypt.co/106113/ftc-sues-meta-stop-facebook-parent-owning-entire-metaverse">buying out competitors</a>, keeping apps out of its storefront through a stifling curatorial process, and <a href="https://www.engadget.com/ftc-meta-investigation-antitrust-virtual-reality-211947952.html">locking developers out of tools to build experiences that might compete with its own offerings</a> should prove that Meta intends to build ever higher walls around its virtual garden.</p>
<p>What’s going on here? Why does Mark Zuckerberg seem to be so distracted by facets of his metaverse that ultimately matter very little? Legs and lifelike avatars are lipstick on a proverbial pig, the sorts of “innovations” a company shows off when it’s lost the thread of its own mission. What are the problems that Zuckerberg is failing to address? What could sink Meta’s metaverse?</p>
<ol>
<li>Developer Experience</li>
</ol>
<p>A good steward of the metaverse should recognize that independent developers and creators are the lifeblood of a thriving virtual ecosystem. While the FTC investigates Meta for anticompetitive practices in its VR division, Meta should dedicate existing resources to increasing transparency and building trust with the folks who will build experiences that drive VR adoption.</p>
<p><a href="https://www.businessinsider.com/meta-lost-15-billion-building-the-metaverse-reality-labs-money-2022-10">Meta has spent $15 billion on its Reality Labs division since the start of 2021</a>, but my own experience with the Oculus development platform suggests that this astonishing figure should be a real head-scratcher for investors. As evidence, I'll simply submit that I've had a support ticket open with Meta for <em><strong>four months</strong></em> that is unresolved as of the time of this writing. My company has been unable to upload a new production build of our VR application due to an error of indiscernible origin being thrown by the Oculus Developer Hub, and repeated requests for help on an issue that is impacting all of our end users continue to be ignored. If developers cannot trust the platform on which they are developing, the platform will ultimately fail.</p>
<ol start="2">
<li>People as Product</li>
</ol>
<p>Zuckerberg made his billions creating an ad delivery platform disguised as a social networking application. Facebook is “free,” and thus its users are its product. If Meta plans to blast our retinas and eardrums with ads recycled from the real world, we can also count on them to harvest consumer data from the sensors on Meta devices at an unprecedented level. (Think about what an advertiser could do if it knew precisely what made you smile, laugh, frown, or crinkle your nose in disgust.) <a href="https://www.digitalinformationworld.com/2022/06/facebook-set-to-lose-14-million-users.html">Consumers are losing their taste for Facebook</a>, and marketing <a href="https://hbr.org/2022/04/why-marketers-are-returning-to-traditional-advertising">agencies are beginning to question the value of digital ad delivery</a> in an era of advertising super-saturation and lackluster consumer confidence. Facebook’s dominance in the digital ad space is unlikely to translate to a virtual world in a way that resonates with consumers and protects their privacy. Furthermore, advertisers would do well to interrogate any promises made by the same company that famously lied <a href="https://www.ccn.com/facebook-lied-about-video-metrics/">about its video metrics in order to generate more ad revenue.</a> The question of making money in the metaverse should shift away from one of ad delivery to one of enabling a thriving digital creator economy.</p>
<ol start="3">
<li>Decentralization</li>
</ol>
<p>Building a metaverse that people actually want to inhabit is an extraordinary undertaking. Do we want Meta (or any one corporation, for that matter) to define the boundaries of our virtual universe? If consumers are expected to invest time and money in the project of building and inhabiting a shared virtual world, shouldn't those of us who are developing the metaverse guarantee that those consumers will retain ownership of their digital assets in perpetuity, independent of any particular platform? This is, of course, antithetical to the business model of the metaverse-as-platform known as Horizon Worlds. Though Zuckerberg has waved his hand in the direction of open VR initiatives, Meta’s quest for total dominance with consumers and its pattern of anticompetitive practices betray Zuck's hypocrisy. Those who choose to build a metaverse should embrace decentralization as a core tenet: as more consumers become aware of the dangers of disinformation and social engineering in networked spaces, more will seek to reclaim ownership of their data, their web presences, and their digital lives. Companies that stand in the way of this progress are likely to doom themselves to irrelevance in what will be a new user-centric digital economy.</p>
<p>Meta is facing a crisis of vision and leadership, one that will be very difficult to manage through a slumping economy, hiring freezes, and unease from investors. The power and promise of the metaverse lies not in its graphical fidelity or its ability to host glorified Zoom meetings in VR. The metaverse captures our imaginations precisely because it offers us the ability to leave behind the problems and pains of our real world (many of which, for the modern consumer, are created by the bad behavior endemic to online social networks like Facebook and Instagram). The metaverse should be seen as a chance to reinvent what it means to be online, to reconnect with our humanity, to make space for underrepresented voices, to bridge deep social and cultural divides, and to enable unprecedented digital creativity. Mark Zuckerberg has staked the future of his company on the success of his metaverse, but Horizon Worlds isn't the virtual cosmos we were promised in our favorite works of science fiction. We can (and should) do better.</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CMark%20Zuckerberg%20and%20the%20Future%20of%20the%20Metaverse%E2%80%9D">Reply via email</a></p>
Coming Home
2022-09-28T16:00:00Z
https://chadams.me/blog/2022-09-28-coming-home/
<p>A 10-year college reunion is a hell of a thing. Sobering, fascinating, fulfilling, a little awkward, nostalgic— a whole mess of nervous small talk interflowing with a stream of serious life updates and memories of cherished teachers and mentors, some of whom no longer walk our earth.</p>
<p>A dominating theme of our conversations ended up being our (members of the class of 2012) nagging inability to define and contextualize professional success. Maybe it’s because so many of us are struggling financially, clawing toward a lifestyle we think of as belonging to the missing middle class. Maybe it’s because so many of us identify as autodidacts, possessing the usual feelings of imposter syndrome that accompany the lack of formal credentials. Maybe it’s because we are less likely to want careers that focus on hyper-specialization and expertise. Maybe being generalists makes us happier! Maybe well-rounded work is more ethical work. Perhaps making money in more than one way is simply more rewarding than slouching toward burnout.</p>
<p>It also could have been the periods of unprecedented political turmoil or the psychic devastation caused by the pandemic, but whatever the causes, I learned that some of us have felt a little lost at times.</p>
<p>Something I’ve learned recently in my professional life that has helped me combat my raging sense of unbelonging amongst peers and leaders is that when we— meaning, what, people of my generation? people who think like me or come from a similar background? people who went to my college? I’m unsure— conceive of professional success, we most often only compare ourselves to those who seem to be much better off than we are by some arbitrary metric (salary, followers, political or artistic successes, etc).</p>
<p>Perhaps this is obvious to a lot of people, but it hasn’t always been obvious to me. We are, in fact, trained to do what I am talking about. We’re told not to punch down (generally good advice), told to make ourselves the best we can be without causing friction by comparing our performances to those of our peers. We’re told to mind our business. You’re perhaps more likely to hear these things if you’re a woman in a career field like mine (computer science, broadly speaking).</p>
<p>Recently, though, I’ve unlocked a new perspective, one that’s been difficult for me to pin down through years of freelancing and consulting. I came to this perspective during a couple of contract software development projects. The work was this: to analyze codebases that were in some state of disarray because of iffy development practices, lack of discipline, or lack of experience. Note: I don’t intend to throw stones here— one of these codebases was a legacy project that I originally coded years ago.</p>
<p>What this work taught me was that we probably should find some charitable way of acknowledging to ourselves when our skill has progressed beyond whatever level was used to create the messes we are expected to fix with increasing frequency as we acquire more seniority and responsibility. In other words,</p>
<h3>we know more than we think we do,</h3>
<p>but sometimes the only way to understand this is to look at people who know less than we do and sagely (privately, modestly) recall the times when we, too, blinked in the shadows of unknowing.</p>
<p>It was good to be reunited with my classmates. We took up our old corner of the quad, making fun of ourselves for becoming the weird alumni we promised ourselves we’d never be when we were in school. We met new babies and kids. We talked about books, since that’s what we did for 4 years as undergraduates. We fell so quickly into old patterns, but brought to our reunion different habits and modes of being acquired in life’s various forges — darker, sadder sometimes, but richer, more self-assured, more determined than before.</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CComing%20Home%E2%80%9D">Reply via email</a></p>
Blindfolded
2022-09-02T10:00:00Z
https://chadams.me/blog/2022-09-02-podcast-editing/
<p>I get in the way of myself when I edit <a href="https://www.sixteentoone.com/" target="_blank">my podcast.</a> Hearing the sound of my own voice for hours during the editing process is always unpleasant. Compound that with a penchant for sweating the small stuff (<em>uhms</em>, <em>ahs</em>, and <em>likes</em>), and editing my own pod can sometimes seem more of a task to get through than a process to enjoy. I also use <a href="https://www.descript.com/" target="_blank">Descript</a> to edit my podcast, which is a great tool for editing conversational audio because it produces a computer-generated transcription of the audio files you upload. The transcription and accompanying script editing view are very helpful for quickly navigating hours of audio, but seeing yourself goof up in the written word reinforces the obsession with burying every blemish.</p>
<p>Not every mistake is worth fixing. Not every flub is something to destroy mirthlessly. Personalities are full of odd quirks; podcast hosts are full of random noises, missteps, less-than-perfect transitions. Sometimes it's just <em>fine</em> to allow your podcast speakers to sound a little more natural (our podcast follows a loose script and is conversational in style). I need to loosen up about my own editing process. To that end, I came up with a bit of advice to myself:</p>
<h3>Edit with your eyes closed.</h3>
<p>Editing with my eyes closed puts me in a useful headspace: that of a listener of my own podcast. When I can't see the transcription or the waveforms (yes, I can tell that that particular shape is an "uhmmm"), I'm far less distracted by my own process. (And I'm also less distracted by all of the other things on or around my desk.) I get to hear the conversation as if it were new to me, without having to swat away visual reminders of slight imperfections. I can still catch any egregious errors this way, and I also get through my editing process more quickly.</p>
<p>Happy creating!</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CBlindfolded%E2%80%9D">Reply via email</a></p>
Clawing Back Content for a Better Web: Twitter Archive with 11ty
2022-04-28T18:00:00Z
https://chadams.me/blog/2022-04-28-take-back-control-of-your-content/
<p><em>The groan heard 'round the world.</em></p>
<p>This week brought news that Elon Musk is mulling a purchase of Twitter, which until now had been holding onto its place as my "favorite" social media platform (insofar as one can enjoy any social media platform). For all of its flaws, I've made great professional connections on Twitter, and I'm not eager to see <a href="https://twitter.com/elonmusk/status/1507259709224632344">what it's going to become</a> if/when it reforms as the privately-held company and personal playground of the "free speech absolutist"/world's richest man.</p>
<p>While I'm waiting to see whether all of the friends, writers, artists, and tech geeks I follow on Twitter are going to leave the platform, I'm going to be proactive in taking back some of my own content before anything bad happens to it. (Make backups! It's only a matter of time.)</p>
<p>As I announced in my last post, this website now runs on Eleventy, a static site generator created and maintained by Zach Leatherman. Zach gave a useful talk a few years ago <a href="https://www.zachleat.com/web/own-your-content/">on owning your own content on social media</a>, which served as a great guide for this process. We're going to follow a modified version of Zach's process to create a Twitter archive.</p>
<h3>Step One: Get your Twitter data.</h3>
<p><em>Quick note: it might take Twitter more than 24 hours to provide you with your archive once you request it, so be patient!</em></p>
<ol>
<li>
<p><a href="https://twitter.com/">Go to Twitter</a></p>
</li>
<li>
<p>Click on the More settings button<br />
<img src="https://chadams.me/img/posts/2022-04-26/step1.jpeg" alt="Step 2 screenshot" /></p>
</li>
<li>
<p>Click on Settings and privacy<br />
<img src="https://chadams.me/img/posts/2022-04-26/step2.jpeg" alt="Step 3 screenshot" /></p>
</li>
<li>
<p>Click on Your Account -> Download an archive of your data<br />
<img src="https://chadams.me/img/posts/2022-04-26/step3.jpeg" alt="Step 4 screenshot" /></p>
</li>
<li>
<p>Click to request your data archive<br />
<img src="https://chadams.me/img/posts/2022-04-26/step4.jpeg" alt="Step 5 screenshot" /></p>
</li>
</ol>
<h3>Step Two: Create your archive.</h3>
<ol>
<li>
<p>Once you've downloaded your archival Twitter data, extract the zip file, navigate to the data/ folder, and find the file named "tweet.js." We're going to use our data to <a href="https://www.11ty.dev/docs/pages-from-data/">create a new page for each Twitter post.</a> Copy the tweet.js file to your _data/ directory in your 11ty project.</p>
</li>
<li>
<p>Open up tweet.js. The very first line should look like this:</p>
</li>
</ol>
<pre><code>window.YTD.tweet.part0 = [
</code></pre>
<p>Change it to the following so that we can treat this as JSON data.</p>
<pre><code>[
</code></pre>
<p>In addition, change the file extension from .js to .json.</p>
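<p>If you'd rather script those edits than make them by hand, the idea can be sketched in a few lines of Python (the strip_js_prefix helper and the sample string here are my own illustration, not part of Twitter's export tooling):</p>

```python
import json

def strip_js_prefix(raw: str) -> list:
    """Drop the 'window.YTD.tweet.part0 = ' assignment from a Twitter
    archive file and parse the remaining array as plain JSON."""
    return json.loads(raw[raw.index("["):])

# A tiny stand-in for the real tweet.js contents:
sample = 'window.YTD.tweet.part0 = [{"tweet": {"id": "1", "full_text": "hi"}}]'
tweets = strip_js_prefix(sample)
print(tweets[0]["tweet"]["full_text"])  # hi
```

<p>From there, you'd read the real tweet.js, pass its contents through the helper, and write the parsed list back out as tweet.json with json.dump.</p>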
<ol start="3">
<li>Create a directory to hold all of your tweets. I'm going to create mine at twitter/ in the 11ty root directory. Inside that folder, create a file called index.liquid and drop in the following frontmatter:</li>
</ol>
<pre><code>---js
{
  layout: "layouts/default.html",
  pagination: {
    data: "tweet",
    size: 10,
    alias: "tweet",
    reverse: true,
    before: (paginationData, fullData) => paginationData.map((d) => ({
      ...d,
      tweet: {
        ...d.tweet,
        tweet: {
          ...d.tweet.tweet,
          created_at_parsed: new Date(d.tweet.created_at)
        }
      }
    })).sort((a, b) => a.tweet.tweet.created_at_parsed - b.tweet.tweet.created_at_parsed)
  },
}
---
Content stuff goes here!
</code></pre>
<p>As you can see, I want my tweet archive to use a default layout created in _includes/layouts. The important piece here is the <em>before</em> callback, which runs prior to the pagination logic. This callback sorts the data by date, which we need because Twitter's export is not in chronological order. (Why? I dunno!) This callback is also why we wrote our frontmatter in JavaScript-- functions won't work in YAML or other frontmatter formats.</p>
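<p>For illustration, the same sort can be sketched in Python. The sample entries and the parse_created_at helper below are hypothetical, but the date format matches what Twitter's archive actually uses (strings like "Wed Apr 27 09:30:00 +0000 2022"):</p>

```python
from datetime import datetime

# Sample entries shaped like Twitter's tweet.js data (nested "tweet" objects),
# deliberately out of chronological order:
tweets = [
    {"tweet": {"id": "2", "created_at": "Thu Apr 28 18:00:00 +0000 2022"}},
    {"tweet": {"id": "1", "created_at": "Wed Apr 27 09:30:00 +0000 2022"}},
]

def parse_created_at(s: str) -> datetime:
    # Twitter's archive uses this fixed format, e.g. "Wed Apr 27 09:30:00 +0000 2022".
    return datetime.strptime(s, "%a %b %d %H:%M:%S %z %Y")

# Sort oldest-to-newest, just like the `before` callback does with Date objects.
tweets.sort(key=lambda d: parse_created_at(d["tweet"]["created_at"]))
print([d["tweet"]["id"] for d in tweets])  # ['1', '2']
```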
<ol start="4">
<li>Create a layout for individual tweet pages. For ease, I put mine in the project root at tweet-layout.liquid. Here's the frontmatter from mine, again in JavaScript:</li>
</ol>
<pre><code>---js
{
  layout: "layouts/default.html",
  tags: ["tweet"],
  pagination: {
    data: "tweet",
    size: 1,
    alias: "tweet",
  },
  permalink: "twitter/{{ tweet.tweet.id }}/index.html",
  eleventyComputed: {
    date(data) {
      const _date = new Date(data.tweet.tweet.created_at);
      data.page.date = _date;
      return _date;
    }
  }
}
---
Content stuff goes here!
</code></pre>
<p>The important part here, as you might guess, is in the eleventyComputed property, wherein we are once again using a quick JavaScript function to convert each tweet's date to a date object that we then <a href="https://github.com/11ty/eleventy/issues/2199">inject into the 11ty page data</a> for use by our pagination process. This is a little hacky <a href="https://www.11ty.dev/docs/dates/">(due to the way 11ty looks at page dates by default)</a>, but it works.</p>
<ol start="5">
<li>Build your 11ty project! You should now have a Twitter archive created from your exported data. Check out my archive and an individual tweet page below!</li>
</ol>
<p><a href="https://www.chadams.me/twitter/">https://www.chadams.me/twitter/</a><br />
<a href="https://www.chadams.me/twitter/1519417069963468815/">https://www.chadams.me/twitter/1519417069963468815/</a></p>
<p>Indie web folks would look at the process we just went through as an example of "Publish Elsewhere, Syndicate (to your) Own Site," or PESOS. I hope you found this useful! Thanks for stopping by, and long live the indie web!</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CClawing%20Back%20Content%20for%20a%20Better%20Web:%20Twitter%20Archive%20with%2011ty%E2%80%9D">Reply via email</a></p>
New Looks2022-04-06T12:00:00Zhttps://chadams.me/blog/2022-04-06-relaunch/
<p>Living through the last couple of years of pandemic-fueled chaos had left me feeling burnt out. Over the last few weeks, though, I've felt more myself than I have in quite some time. Part of this may come from the fact that I've been busy on several new and exciting projects (and part of it probably comes from a slow recovery after my second round of battling COVID). I'm still accepting new projects for the spring/summer of 2022, so if you need a full-stack developer, <a href="https://chadams.me/contact">drop me a line</a>!</p>
<p>This recent wave of creative energy has driven me to do that thing that web developers do: rebuild a perfectly good portfolio website <em>just because.</em> So, for the fourth or maybe fifth time since I've been working in web development, I present to you: my website's new look!</p>
<p>This refresh didn't bring a huge departure from the design language of my last site, but behind the scenes, big things happened: I migrated this site from <a href="https://jekyllrb.com/">Jekyll</a> (RIP, Jekyll! You were great!) to <a href="https://www.11ty.dev/">11ty</a>. There were a few reasons for this, but the main ones were that:</p>
<ol>
<li>I've become increasingly entrenched in the JavaScript world over the last few years. None of my production projects were built with Ruby, so I was becoming less interested in maintaining a Ruby project.</li>
<li>Despite a tight integration with GitHub Pages, Jekyll development was essentially frozen several years ago. 11ty became the platform of choice for many people who were looking for a new static site generator.</li>
</ol>
<p>The migration process took a bit of figuring out, but 11ty is remarkably easy to learn and to use. If you're into Jamstack development and need a fast, fun static site, try 11ty! I also moved domain registrars and web hosts during this process (my old web hosting company was acquired and became very, very bad). The site is now deployed with <a href="https://www.netlify.com/">Netlify</a>, which is a real delight to use. Netlify is possibly the most developer-friendly solution I've used in quite a while, and it's a good tool for beginners, too.</p>
<p>There are still a few kinks and bugs, but I'm happy with where the site is now, and I'm hoping that the new tech stack will make it easier to push more frequent updates and do more blogging. Plus, there's now an Easter egg hidden in my site. To find it, head to the <a href="https://chadams.me/">home page</a> and click on the logo in the main navigation bar. Happy hunting!</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CNew%20Looks%E2%80%9D">Reply via email</a></p>
Deploying a Django + MySQL App to a Digital Ocean Droplet2021-06-08T00:00:00Zhttps://chadams.me/blog/2021-06-08-django-on-digital-ocean-droplet/
<p>Recently, I decided I wanted to spin up a low-cost virtual private server (VPS) to deploy a web app that I'm developing for my own enjoyment and personal use (it's called "Heroplex"). Though I'm accustomed to the AWS deployment pipeline for my day-to-day client work, I wanted something a little less chonky and intimidating for this small side project. I asked a friend of mine what service he'd recommend that fit that cheap, relatively easy-to-set-up profile, and he suggested Digital Ocean, which is what I ended up selecting. The application I'm developing is a fairly simple Python+Django and MySQL thing, so here's a detailed step-by-step of how I got it up and running on a Digital Ocean Droplet.</p>
<p>Disclaimer: The process outlined below is very much for a development environment. You'd want to consider a whole host of security and performance issues to get this thing ready for production, but we're not going to cover those matters here.</p>
<h4>1. Set up a Digital Ocean Droplet</h4>
<p>Digital Ocean's <a href="https://www.digitalocean.com/products/droplets/" target="_blank">Droplets</a> provide a cheap, easy platform for virtual machine management and application deployment. Sign up for a Digital Ocean account and head to Create > Droplets to get started. I chose the cheapest and most basic options for this project:</p>
<ul>
<li>Ubuntu 20.04 x64</li>
<li>Shared CPU "Basic" option for $5/month</li>
<li>NY 1 Datacenter Region</li>
<li>Password authentication (set a strong password in the box)</li>
<li>Hostname set to the name of my project, in this case "Heroplex."</li>
<li>No backups enabled because I'm penny pinching, but please feel free to enable them if you wish.</li>
</ul>
<h4>2. Use your terminal to SSH into the Droplet you just created.</h4>
<p>On your Digital Ocean dashboard, you're going to see the Droplet you just created. That droplet has an IPv4 address (in my case, it's 159.203.187.32)-- go ahead and copy that address to your clipboard. In your terminal, type the following, replacing the address with the number you just copied:</p>
<pre class="language-python"><code class="language-python">ssh root@<span class="token number">159.203</span><span class="token number">.187</span><span class="token number">.32</span></code></pre>
<p>This will attempt to log you in as the Droplet root user. You will be prompted to enter the master Droplet password that you created in Step 1. Type your password and hit enter.</p>
<h4>3. Run updates</h4>
<p>You'll see a welcome message from your Droplet. The first thing we're going to do is run updates on your instance, just to make sure we're starting with the most up-to-date packages.</p>
<pre class="language-python"><code class="language-python">sudo apt<span class="token operator">-</span>get update<br />sudo apt<span class="token operator">-</span>get <span class="token operator">-</span>y upgrade</code></pre>
<h4>4. Install MySQL and run configuration script</h4>
<p>As mentioned, this Django project uses MySQL, so we need to install mysql-server on the Droplet. If you'd rather use another database management system, feel free to substitute it here.</p>
<pre class="language-python"><code class="language-python">sudo apt<span class="token operator">-</span>get install mysql<span class="token operator">-</span>server</code></pre>
<p>After the installation finishes, we're going to use an included security configuration script to clean up a few things and patch a few security holes.</p>
<pre class="language-python"><code class="language-python">sudo mysql_secure_installation</code></pre>
<p>You'll be prompted to run a password validator component-- I chose option 2 for my password validation, but feel free to choose what makes sense for you. You'll then be asked to set a MySQL root password, remove anonymous users, disallow remote root login, remove the default database (called "test"), and decide whether to reload the privileges table for the changes to take effect (say "yes" to this last bit).</p>
<h4>5. Create a dedicated MySQL user</h4>
<p>For security reasons, we don't want our application to use the root MySQL user to connect to the database we are going to create. Let's set up a dedicated MySQL user that the app can use to access the database. I'm giving my MySQL user the same name as my application-- "heroplex." Replace "heroplex" below with whatever you want your user to be called, and replace "password" with a strong password that your app can use to connect to the database.</p>
<pre class="language-python"><code class="language-python">sudo mysql<br />mysql<span class="token operator">></span> CREATE USER <span class="token string">'heroplex'</span>@<span class="token string">'localhost'</span> IDENTIFIED BY <span class="token string">'password'</span><span class="token punctuation">;</span></code></pre>
<h4>6. Grant necessary database privileges to the user you just created</h4>
<p>By default, the MySQL user you just created doesn't have permission to do much of anything in the database management environment. We have to explicitly grant the user privileges to perform a variety of actions on the database. For our Django app, the user needs the privileges listed in the command below.</p>
<pre class="language-python"><code class="language-python">mysql<span class="token operator">></span> GRANT CREATE<span class="token punctuation">,</span> ALTER<span class="token punctuation">,</span> DROP<span class="token punctuation">,</span> INSERT<span class="token punctuation">,</span> UPDATE<span class="token punctuation">,</span> DELETE<span class="token punctuation">,</span> SELECT<span class="token punctuation">,</span> REFERENCES<span class="token punctuation">,</span> RELOAD<span class="token punctuation">,</span> INDEX on <span class="token operator">*</span><span class="token punctuation">.</span><span class="token operator">*</span> TO <span class="token string">'heroplex'</span>@<span class="token string">'localhost'</span> WITH GRANT OPTION<span class="token punctuation">;</span></code></pre>
<h4>7. Create a new database for your application</h4>
<p>Your application needs a database to connect to! Let's create one called heroplex_db (you can rename yours) and then flush privileges to make sure all of the changes stick.</p>
<pre class="language-python"><code class="language-python">mysql<span class="token operator">></span> CREATE DATABASE heroplex_db<span class="token punctuation">;</span><br />mysql<span class="token operator">></span> FLUSH PRIVILEGES<span class="token punctuation">;</span></code></pre>
<p>After you're done, you can exit MySQL.</p>
<pre class="language-python"><code class="language-python">mysql<span class="token operator">></span> exit</code></pre>
<h4>8. Install NGINX and Supervisor</h4>
<p>NGINX is a free and open-source web server that we are going to install on our Droplet.</p>
<pre class="language-python"><code class="language-python">sudo apt<span class="token operator">-</span>get <span class="token operator">-</span>y install nginx</code></pre>
<p><a href="http://supervisord.org/" target="_blank">Supervisor</a> is a cool little client/server tool that allows us to monitor and control processes on Linux and UNIX-like operating systems. Supervisor will keep our application server going and restart it in the case of hiccups and technical glitches. We're going to install it so that we don't have to log on and restart things manually every time something goes wrong.</p>
<pre class="language-python"><code class="language-python">sudo apt<span class="token operator">-</span>get <span class="token operator">-</span>y install supervisor</code></pre>
<p>We're also going to enable and start the Supervisor client:</p>
<pre class="language-python"><code class="language-python">sudo systemctl enable supervisor<br />sudo systemctl start supervisor</code></pre>
<h4>9. Set up a virtual environment on your Droplet to manage requirements and packages</h4>
<p>We're going to install the Python 3 virtual environment package, as well as a python-dev package that is a dependency for a few other things down the line:</p>
<pre class="language-python"><code class="language-python">sudo apt<span class="token operator">-</span>get <span class="token operator">-</span>y install python3<span class="token operator">-</span>virtualenv<br />sudo apt<span class="token operator">-</span>get install python<span class="token operator">-</span>dev</code></pre>
<h4>10. Create and configure an application user for your Django application</h4>
<p>We're going to make a new Droplet user, give it sudo privileges, and configure the Python virtual environment inside of that newly-created user's home directory.</p>
<pre class="language-python"><code class="language-python">adduser heroplex</code></pre>
<p>Fill out the fields if you wish. Then give this new user sudo privileges and switch to this new user:</p>
<pre class="language-python"><code class="language-python">gpasswd <span class="token operator">-</span>a heroplex sudo<br />su <span class="token operator">-</span> heroplex</code></pre>
<h4>11. Configure the Python virtual environment and clone your project repo</h4>
<p>We are now logged in as our new Droplet user, in our case named "heroplex." We're going to install our Django application (which we've already spun up locally and pushed to a GitHub repo) here, so let's go ahead and initiate the virtual environment:</p>
<pre class="language-python"><code class="language-python">virtualenv <span class="token punctuation">.</span><br />source <span class="token builtin">bin</span><span class="token operator">/</span>activate</code></pre>
<p>And now we'll just clone our project repo (replace the url with that of your own project):</p>
<pre class="language-python"><code class="language-python">git clone https<span class="token punctuation">:</span><span class="token operator">//</span>github<span class="token punctuation">.</span>com<span class="token operator">/</span>chadamski<span class="token operator">/</span>Heroplex<span class="token punctuation">.</span>git</code></pre>
<p>Enter your GitHub username/password to start the download.</p>
<h4>12. Install your Django project's dependencies</h4>
<p>We have to install a couple of things to get Python and MySQL working together nicely. (I ran into a bunch of errors the first time I tried to do this.) Install the following list of dependencies:</p>
<pre class="language-python"><code class="language-python">cd Heroplex<br />sudo apt<span class="token operator">-</span>get install mysql<span class="token operator">-</span>server<br />sudo apt<span class="token operator">-</span>get install python3<span class="token operator">-</span>dev default<span class="token operator">-</span>libmysqlclient<span class="token operator">-</span>dev build<span class="token operator">-</span>essential<br />sudo apt<span class="token operator">-</span>get install libssl<span class="token operator">-</span>dev</code></pre>
<p>Once the dependencies are installed, you should be able to install mysqlclient, as well as the rest of your project's dependencies, located in requirements.txt:</p>
<pre class="language-python"><code class="language-python">pip install mysqlclient<br />pip install <span class="token operator">-</span>r requirements<span class="token punctuation">.</span>txt</code></pre>
<h4>13. Set the proper database connection credentials in your Django project's settings.py file and add your IP address to allowed hosts</h4>
<p>In your project repo, locate your app's settings.py file and add the database connection details you created earlier. In addition, you'll want to add your Droplet IP address to the "allowed hosts" section of your settings.py file.</p>
<pre class="language-python"><code class="language-python">DATABASES <span class="token operator">=</span> <span class="token punctuation">{</span><br /> <span class="token string">'default'</span><span class="token punctuation">:</span> <span class="token punctuation">{</span><br /> <span class="token string">'ENGINE'</span><span class="token punctuation">:</span> <span class="token string">'django.db.backends.mysql'</span><span class="token punctuation">,</span><br /> <span class="token string">'NAME'</span><span class="token punctuation">:</span> <span class="token string">'heroplex_db'</span><span class="token punctuation">,</span><br /> <span class="token string">'USER'</span><span class="token punctuation">:</span> <span class="token string">'heroplex'</span><span class="token punctuation">,</span><br /> <span class="token string">'PASSWORD'</span><span class="token punctuation">:</span> <span class="token string">'password'</span><span class="token punctuation">,</span><br /> <span class="token punctuation">}</span><br /> <span class="token punctuation">}</span><br /><br />ALLOWED_HOSTS <span class="token operator">=</span> <span class="token punctuation">[</span><br /> <span class="token string">'159.203.187.32'</span><span class="token punctuation">,</span><br /> <span class="token string">'127.0.0.1'</span><span class="token punctuation">,</span><br /> <span class="token string">'localhost'</span><span class="token punctuation">,</span><br /><span class="token punctuation">]</span></code></pre>
<p>Navigate to your project folder and run 'git pull origin' so you can snag these updates. Optionally, you may want to configure environment variables to store credentials and easily switch between local and development databases.</p>
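<p>As a sketch of that optional approach, settings.py could read its credentials from environment variables with sensible local defaults. The HEROPLEX_DB_* variable names below are invented for illustration-- use whatever naming scheme you like:</p>

```python
import os

# Environment-driven version of the DATABASES block from settings.py.
# Unset variables fall back to the local development defaults.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ.get("HEROPLEX_DB_NAME", "heroplex_db"),
        "USER": os.environ.get("HEROPLEX_DB_USER", "heroplex"),
        "PASSWORD": os.environ.get("HEROPLEX_DB_PASSWORD", ""),
        "HOST": os.environ.get("HEROPLEX_DB_HOST", "localhost"),
    }
}

# A comma-separated env var keeps the host list out of the repo, too.
ALLOWED_HOSTS = os.environ.get(
    "HEROPLEX_ALLOWED_HOSTS", "127.0.0.1,localhost"
).split(",")
```

<p>With this in place, the Droplet can export its own values (its IP address, the real database password) while your laptop uses the defaults, and no secrets live in version control.</p>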
<h4>14. Test everything to make sure it's working</h4>
<p>We're going to run Django migrations to our database, collect static assets for the project, and then run the development server to make sure everything is properly configured:</p>
<pre class="language-python"><code class="language-python">python manage<span class="token punctuation">.</span>py migrate<br />python manage<span class="token punctuation">.</span>py collectstatic<br />python manage<span class="token punctuation">.</span>py runserver <span class="token number">0.0</span><span class="token number">.0</span><span class="token number">.0</span><span class="token punctuation">:</span><span class="token number">8000</span></code></pre>
<p>How will we know if this works? Well, head on over to your IP address at port 8000:</p>
<p><a href="http://159.203.187.32:8000/">http://159.203.187.32:8000/</a></p>
<p>Success! After you've confirmed this works, hit CTRL+C to quit the development server. Now we're going to automate your server setup.</p>
<h4>15. Install and configure Gunicorn</h4>
<p>Gunicorn is a lightweight and speedy Python WSGI HTTP server. Let's install it inside of our virtual environment and create a start file:</p>
<pre class="language-python"><code class="language-python">pip install gunicorn<br />vim <span class="token builtin">bin</span><span class="token operator">/</span>gunicorn_start</code></pre>
<p>Copy over the following contents and save. NOTE: This was the most troublesome part of my project setup because of how many nested "heroplex" folders I had inadvertently created. If I were doing this again, I'd probably have chosen a few different naming conventions along the way (such as naming the app user something other than heroplex) to make this all less confusing. Suffice it to say, if you run into difficulties, you've probably set the wrong DIR, BIND, source, or exec path in this file. Play with all of that nesting until you get it right.</p>
<pre class="language-python"><code class="language-python"><span class="token comment">#!/bin/bash</span><br /> <br />NAME<span class="token operator">=</span><span class="token string">"heroplex"</span><br />DIR<span class="token operator">=</span><span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>Heroplex<span class="token operator">/</span>heroplex<br />USER<span class="token operator">=</span>heroplex<br />GROUP<span class="token operator">=</span>heroplex<br />WORKERS<span class="token operator">=</span><span class="token number">3</span><br />BIND<span class="token operator">=</span>unix<span class="token punctuation">:</span><span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>run<span class="token operator">/</span>gunicorn<span class="token punctuation">.</span>sock<br />DJANGO_SETTINGS_MODULE<span class="token operator">=</span>heroplex<span class="token punctuation">.</span>settings<br />DJANGO_WSGI_MODULE<span class="token operator">=</span>heroplex<span class="token punctuation">.</span>wsgi<br />LOG_LEVEL<span class="token operator">=</span>error<br /><br />cd $DIR<br />source <span class="token punctuation">.</span><span class="token punctuation">.</span><span class="token operator">/</span><span class="token punctuation">.</span><span class="token punctuation">.</span><span class="token operator">/</span><span class="token builtin">bin</span><span class="token operator">/</span>activate<br /><br />export DJANGO_SETTINGS_MODULE<span class="token operator">=</span>$DJANGO_SETTINGS_MODULE<br />export PYTHONPATH<span class="token operator">=</span>$DIR<span class="token punctuation">:</span>$PYTHONPATH<br /><br /><span class="token keyword">exec</span> <span class="token punctuation">.</span><span class="token punctuation">.</span><span class="token operator">/</span><span class="token punctuation">.</span><span 
class="token punctuation">.</span><span class="token operator">/</span><span class="token builtin">bin</span><span class="token operator">/</span>gunicorn $<span class="token punctuation">{</span>DJANGO_WSGI_MODULE<span class="token punctuation">}</span><span class="token punctuation">:</span>application \<br /> <span class="token operator">-</span><span class="token operator">-</span>name $NAME \<br /> <span class="token operator">-</span><span class="token operator">-</span>workers $WORKERS \<br /> <span class="token operator">-</span><span class="token operator">-</span>user<span class="token operator">=</span>$USER \<br /> <span class="token operator">-</span><span class="token operator">-</span>group<span class="token operator">=</span>$GROUP \<br /> <span class="token operator">-</span><span class="token operator">-</span>bind<span class="token operator">=</span>$BIND \<br /> <span class="token operator">-</span><span class="token operator">-</span>log<span class="token operator">-</span>level<span class="token operator">=</span>$LOG_LEVEL \<br /> <span class="token operator">-</span><span class="token operator">-</span>log<span class="token operator">-</span><span class="token builtin">file</span><span class="token operator">=</span><span class="token operator">-</span></code></pre>
<p>As a final step, we need to make sure our gunicorn_start file is executable. We also want to create a new directory called run, which is where Gunicorn will create our socket file:</p>
<pre class="language-python"><code class="language-python">chmod u<span class="token operator">+</span>x <span class="token builtin">bin</span><span class="token operator">/</span>gunicorn_start<br />mkdir run</code></pre>
<h4>16. Configure Supervisor to keep an eye on our newly-created Gunicorn server</h4>
<p>Let's set up a little bit of logging inside our virtual environment so that we can tell if anything is going wrong.</p>
<pre class="language-python"><code class="language-python">mkdir logs<br />touch logs<span class="token operator">/</span>gunicorn<span class="token operator">-</span>error<span class="token punctuation">.</span>log</code></pre>
<p>We'll check this log file if the server fails to start-- this is how I discovered my paths were all screwed up in the step above. Now let's create a Supervisor configuration file:</p>
<pre class="language-python"><code class="language-python">sudo vim <span class="token operator">/</span>etc<span class="token operator">/</span>supervisor<span class="token operator">/</span>conf<span class="token punctuation">.</span>d<span class="token operator">/</span>heroplex<span class="token punctuation">.</span>conf</code></pre>
<p>Add the following contents to that file:</p>
<pre class="language-python"><code class="language-python"><span class="token punctuation">[</span>program<span class="token punctuation">:</span>heroplex<span class="token punctuation">]</span><br />command<span class="token operator">=</span><span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span><span class="token builtin">bin</span><span class="token operator">/</span>gunicorn_start<br />user<span class="token operator">=</span>heroplex<br />autostart<span class="token operator">=</span>true<br />autorestart<span class="token operator">=</span>true<br />redirect_stderr<span class="token operator">=</span>true<br />stdout_logfile<span class="token operator">=</span><span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>logs<span class="token operator">/</span>gunicorn<span class="token operator">-</span>error<span class="token punctuation">.</span>log</code></pre>
<p>Let's now tell Supervisor to check what we just created and update itself with our configurations. Then we'll check the status of the server and its machine overlord.</p>
<pre class="language-python"><code class="language-python">sudo supervisorctl reread<br />sudo supervisorctl update<br />sudo supervisorctl status heroplex</code></pre>
<p>You should see something like this:</p>
<pre class="language-python"><code class="language-python">heroplex RUNNING pid <span class="token number">38466</span><span class="token punctuation">,</span> uptime <span class="token number">2</span><span class="token punctuation">:</span><span class="token number">28</span><span class="token punctuation">:</span><span class="token number">38</span></code></pre>
<p>If you don't, just check those error logs we created, and make sure your paths are all set correctly in your gunicorn_start file.</p>
<p>Now Supervisor is in control of our web application.</p>
<h4>17. Configure NGINX</h4>
<p>Let's add an NGINX configuration file inside of /etc/nginx/sites-available/:</p>
<pre class="language-python"><code class="language-python">sudo vim <span class="token operator">/</span>etc<span class="token operator">/</span>nginx<span class="token operator">/</span>sites<span class="token operator">-</span>available<span class="token operator">/</span>heroplex</code></pre>
<p>Add the following contents and save:</p>
<pre class="language-python"><code class="language-python">upstream app_server <span class="token punctuation">{</span><br /> server unix<span class="token punctuation">:</span><span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>run<span class="token operator">/</span>gunicorn<span class="token punctuation">.</span>sock fail_timeout<span class="token operator">=</span><span class="token number">0</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span><br /><br />server <span class="token punctuation">{</span><br /> listen <span class="token number">80</span><span class="token punctuation">;</span><br /><br /> <span class="token comment"># set this to be the IP address or domain of your Droplet</span><br /> server_name <span class="token number">159.203</span><span class="token number">.187</span><span class="token number">.32</span><span class="token punctuation">;</span><br /><br /> keepalive_timeout <span class="token number">5</span><span class="token punctuation">;</span><br /> client_max_body_size 4G<span class="token punctuation">;</span><br /><br /> access_log <span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>logs<span class="token operator">/</span>nginx<span class="token operator">-</span>access<span class="token punctuation">.</span>log<span class="token punctuation">;</span><br /> error_log <span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>logs<span class="token operator">/</span>nginx<span class="token operator">-</span>error<span class="token punctuation">.</span>log<span class="token punctuation">;</span><br /><br /> <span class="token comment"># tell NGINX how to serve Django's static files and watch out for all of that nested nonsense</span><br /> location <span class="token 
operator">/</span>static<span class="token operator">/</span> <span class="token punctuation">{</span><br /> alias <span class="token operator">/</span>home<span class="token operator">/</span>heroplex<span class="token operator">/</span>Heroplex<span class="token operator">/</span>heroplex<span class="token operator">/</span>static<span class="token operator">/</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> location <span class="token operator">/</span> <span class="token punctuation">{</span><br /> try_files $uri @proxy_to_app<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> location @proxy_to_app <span class="token punctuation">{</span><br /> proxy_set_header X<span class="token operator">-</span>Forwarded<span class="token operator">-</span>For $proxy_add_x_forwarded_for<span class="token punctuation">;</span><br /> proxy_set_header Host $http_host<span class="token punctuation">;</span><br /> proxy_redirect off<span class="token punctuation">;</span><br /> proxy_pass http<span class="token punctuation">:</span><span class="token operator">//</span>app_server<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre>
<p>Then we'll create a symlink (symbolic link) from the <code>sites-enabled</code> directory to our configuration file in <code>sites-available</code>, both inside the NGINX configuration directory. NGINX only loads sites listed in <code>sites-enabled</code>, and the symlink lets us enable ours without duplicating the file; there's still just one copy, so we never have to keep two locations in sync.</p>
<pre class="language-python"><code class="language-python">sudo ln <span class="token operator">-</span>s <span class="token operator">/</span>etc<span class="token operator">/</span>nginx<span class="token operator">/</span>sites<span class="token operator">-</span>available<span class="token operator">/</span>heroplex <span class="token operator">/</span>etc<span class="token operator">/</span>nginx<span class="token operator">/</span>sites<span class="token operator">-</span>enabled<span class="token operator">/</span>heroplex</code></pre>
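<p>If the symlink idea is new, here's a self-contained sketch of how it behaves. The paths below are throwaway temp files for illustration only, not part of this deployment:</p>

```shell
# Illustration only: a symlink is a pointer, not a copy.
tmp=$(mktemp -d)
echo "server_name example.com;" > "$tmp/available"   # the "real" file
ln -s "$tmp/available" "$tmp/enabled"                # the link pointing to it
grep "example.com" "$tmp/enabled"                    # reading the link reads the original
```

Any edit to the original file is immediately visible through the link, which is exactly why NGINX can read site configs out of <code>sites-enabled</code> while we only ever edit <code>sites-available</code>.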
<p>Now let's remove the default NGINX site and restart the service (running <code>sudo nginx -t</code> first will catch any syntax errors in the configuration):</p>
<pre class="language-python"><code class="language-python">sudo rm <span class="token operator">/</span>etc<span class="token operator">/</span>nginx<span class="token operator">/</span>sites<span class="token operator">-</span>enabled<span class="token operator">/</span>default<br />sudo service nginx restart</code></pre>
<h4>18. Updating the application</h4>
<p>To update our application after we've made changes to our repo, we just need to follow a few steps. If we haven't already, we'll connect to our Droplet and activate our virtual environment:</p>
<pre class="language-python"><code class="language-python">ssh heroplex@<span class="token number">159.203</span><span class="token number">.187</span><span class="token number">.32</span><br />source <span class="token builtin">bin</span><span class="token operator">/</span>activate</code></pre>
<p>Then we'll navigate to the project folder and pull down updates from our repo:</p>
<pre class="language-python"><code class="language-python">cd heroplex<br />git pull origin master</code></pre>
<p>After that, we just need to collect our static assets, apply our database migrations, and restart the application process through Supervisor. That's it!</p>
<pre class="language-python"><code class="language-python">python manage<span class="token punctuation">.</span>py collectstatic<br />python manage<span class="token punctuation">.</span>py migrate<br />sudo supervisorctl restart heroplex</code></pre>
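<p>Since these update steps never change, they can be collected into a small shell function on the Droplet. This is just a sketch; the home-directory virtualenv, the <code>heroplex</code> project path, and the Supervisor program name are the ones assumed throughout this guide and may differ in your setup:</p>

```shell
# A sketch of the update steps bundled into one function; run it on the Droplet.
# Assumes the layout used in this guide (virtualenv in $HOME, project in
# ~/heroplex, Supervisor program named "heroplex").
deploy() {
    source "$HOME/bin/activate" &&               # activate the virtualenv
    cd "$HOME/heroplex" &&                       # enter the project directory
    git pull origin master &&                    # pull the latest code
    python manage.py collectstatic --noinput &&  # refresh static assets
    python manage.py migrate &&                  # apply database migrations
    sudo supervisorctl restart heroplex          # restart the app via Supervisor
}
```

Dropping this into <code>~/.bashrc</code> turns every future deploy into a single <code>deploy</code> command.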
<p>Now you're the proud owner of a shiny new Django + MySQL development environment on a Digital Ocean Droplet. Thanks to Vitor Freitas, who published an <a href="https://simpleisbetterthancomplex.com/tutorial/2016/10/14/how-to-deploy-to-digital-ocean.html" target="_blank">earlier example of this process</a> using a PostgreSQL database. I've updated it for the latest 2021 packages and added notes on MySQL dependencies. Happy coding!</p>
<p><a href="mailto:michael@mharley.dev?subject=Re:%20%E2%80%9CDeploying%20a%20Django%20+%20MySQL%20App%20to%20a%20Digital%20Ocean%20Droplet%E2%80%9D">Reply via email</a></p>