<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>The Garden(&#39;s Blog)</title>
    <link>https://blog.mecha.garden/</link>
    <description>Status updates and other server news for https://mecha.garden</description>
    <pubDate>Thu, 09 Apr 2026 20:32:32 +0000</pubDate>
    <item>
      <title>Everything&#39;s broken and nobody knows why</title>
      <link>https://blog.mecha.garden/everythings-broken-and-nobody-knows-why</link>
      <description>&lt;![CDATA[Last night the Sharkey server inexplicably stopped receiving inbox requests, rejecting them with a 401 status code. I&#39;ve investigated and for the life of me cannot find a reason for this, nor what the initial incident might have been. &#xA;&#xA;Basically go outside or something, we&#39;ll be back soon I hope.]]&gt;</description>
      <content:encoded><![CDATA[<p>Last night the Sharkey server inexplicably stopped receiving inbox requests, rejecting them with a <code>401</code> status code. I&#39;ve investigated and for the life of me cannot find a reason for this, nor what the initial incident might have been.</p>

<p>Basically go outside or something, we&#39;ll be back soon I hope.</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/everythings-broken-and-nobody-knows-why</guid>
      <pubDate>Thu, 06 Jun 2024 15:18:17 +0000</pubDate>
    </item>
    <item>
      <title>Oh geez, things are broken again</title>
      <link>https://blog.mecha.garden/oh-geez-things-are-broken-again</link>
      <description>&lt;![CDATA[You may notice that mecha.garden is kind of slow today (read: occasionally unreachable). This is apparently due to network congestion at our VPS host, an issue they&#39;re aware of but that no one is available to work on right now. Hopefully whoever is hogging the network (reportedly 50MB/s in and out) finishes up whatever they&#39;re doing soon, but until then (or until the ops people at cyberia get off work) we&#39;re stuck with basically no bandwidth.&#xA;&#xA;Oops ᕕ( ᐛ )ᕗ&#xA;– E]]&gt;</description>
      <content:encoded><![CDATA[<p>You may notice that mecha.garden is kind of slow today (read: occasionally unreachable). This is apparently due to network congestion at our VPS host, an issue they&#39;re aware of but that no one is available to work on right now. Hopefully whoever is hogging the network (reportedly 50MB/s in and out) finishes up whatever they&#39;re doing soon, but until then (or until the ops people at cyberia get off work) we&#39;re stuck with basically no bandwidth.</p>

<p>Oops ᕕ( ᐛ )ᕗ
– E</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/oh-geez-things-are-broken-again</guid>
      <pubDate>Tue, 09 Apr 2024 15:03:03 +0000</pubDate>
    </item>
    <item>
      <title>Downtime 2/9/24</title>
      <link>https://blog.mecha.garden/downtime-2-9-24</link>
      <description>&lt;![CDATA[It looks like our hosting provider is having issues, not sure at the moment when this will be resolved. They are working on it, but depending on the root cause it could be a lengthy recovery. Rest assured the database is safe, so if we need to jump to a different provider or something nothing will be lost except the existing job processing queues. I&#39;ll keep this post updated with info as I get it.&#xA;&#xA;UPDATE:&#xA;&#xA;Disk issues have forced an early migration to a new server, which is currently underway. Still no ETA, but the fix is in the works at least.&#xA;&#xA;UPDATE 2/10/24:&#xA;&#xA;It looks like the migration is complete, everything should be up and running again. Sorry for the unexpected downtime, the new server should be more stable. (⁠☞⁠ ͡⁠°⁠ ͜⁠ʖ⁠ ͡⁠°⁠)⁠☞]]&gt;</description>
      <content:encoded><![CDATA[<p>It looks like our hosting provider is having issues, not sure at the moment when this will be resolved. They are working on it, but depending on the root cause it could be a lengthy recovery. Rest assured the database is safe, so if we need to jump to a different provider or something nothing will be lost except the existing job processing queues. I&#39;ll keep this post updated with info as I get it.</p>

<p>UPDATE:</p>

<p>Disk issues have forced an early migration to a new server, which is currently underway. Still no ETA, but the fix is in the works at least.</p>

<p>UPDATE 2/10/24:</p>

<p>It looks like the migration is complete, everything should be up and running again. Sorry for the unexpected downtime, the new server should be more stable. (⁠☞⁠ ͡⁠°⁠ ͜⁠ʖ⁠ ͡⁠°⁠)⁠☞</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/downtime-2-9-24</guid>
      <pubDate>Fri, 09 Feb 2024 23:42:29 +0000</pubDate>
    </item>
    <item>
      <title>Migration update</title>
      <link>https://blog.mecha.garden/migration-update</link>
      <description>&lt;![CDATA[Well we&#39;re pretty far over the ~2 hour downtime I wanted, so here&#39;s a little status update. &#xA;&#xA;The first two rounds of database migrations went off without issue, though they took a bit longer than I had anticipated. I had to re-patch a change I had made to allow the instance to talk to the connection pool for the database, which I had forgotten about. Fortunately Sharkey is much quicker to build than Firefish, and the image is considerably smaller as well, which is a nice bonus. Currently the last round of migrations is running, but there are a lot of changes that need to be made, so it will probably be another hour minimum before we&#39;re back online. &#xA;&#xA;TLDR: everything&#39;s going fine, it&#39;s just slower than I&#39;d guessed (￣┰￣*)]]&gt;</description>
      <content:encoded><![CDATA[<p>Well we&#39;re pretty far over the ~2 hour downtime I wanted, so here&#39;s a little status update.</p>

<p>The first two rounds of database migrations went off without issue, though they took a bit longer than I had anticipated. I had to re-patch a change I had made to allow the instance to talk to the connection pool for the database, which I had forgotten about. Fortunately Sharkey is much quicker to build than Firefish, and the image is considerably smaller as well, which is a nice bonus. Currently the last round of migrations is running, but there are a lot of changes that need to be made, so it will probably be another hour minimum before we&#39;re back online.</p>

<p>TLDR: everything&#39;s going fine, it&#39;s just slower than I&#39;d guessed (￣┰￣*)</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/migration-update</guid>
      <pubDate>Sat, 13 Jan 2024 21:30:03 +0000</pubDate>
    </item>
    <item>
      <title>The Garden Is Migrating!</title>
      <link>https://blog.mecha.garden/the-garden-is-migrating</link>
      <description>&lt;![CDATA[This weekend I am planning on migrating mecha.garden to a different Misskey fork, Sharkey. This is due in large part to the recent drought of activity on the Firefish git repo, and my personal dissatisfaction with the direction and pace of Firefish&#39;s development in general.&#xA;&#xA;What to expect during the migration&#xA;&#xA;Because Firefish and Sharkey share a common ancestor the migration should go smoothly, hopefully needing only an hour or two of downtime. Currently I&#39;m planning on that happening around noon EST on Saturday the 13th. The majority of the time will be spent running database migrations, and once that&#39;s done there shouldn&#39;t be much else that needs doing. &#xA;&#xA;What to expect from Sharkey&#xA;&#xA;Sharkey is a soft fork of Misskey, compared to Firefish&#39;s hard fork of Misskey V12 (I believe). It should have most of the features you like in Firefish, plus some of their own (user data export, listenbrainz integration 👀). There are some features that will be missing (get your quote-renotes in now), but there are also some superfluous features that have been cut down (did you know there&#39;s an NSFW-detecting machine learning model in Misskey?). All in all the user experience should be similar, with the benefit of all the development and optimization Misskey has received since Firefish forked off, hopefully making things run smoother (and perhaps a bit faster).&#xA;&#xA;As always if things go south, you can check here for updates. Otherwise, see you on the flipside (☞ﾟヮﾟ)☞&#xA;&#xA;- E]]&gt;</description>
      <content:encoded><![CDATA[<p>This weekend I am planning on migrating mecha.garden to a <em>different</em> Misskey fork, <a href="https://git.joinsharkey.org/Sharkey/Sharkey">Sharkey</a>. This is due in large part to the recent drought of activity on the Firefish git repo, and my personal dissatisfaction with the direction and pace of Firefish&#39;s development in general.</p>

<h2 id="what-to-expect-during-the-migration">What to expect during the migration</h2>

<p>Because Firefish and Sharkey share a common ancestor the migration <strong>should</strong> go smoothly, hopefully needing only an hour or two of downtime. Currently I&#39;m planning on that happening around noon EST on Saturday the 13<sup>th</sup>. The majority of the time will be spent running database migrations, and once that&#39;s done there shouldn&#39;t be much else that needs doing.</p>

<h2 id="what-to-expect-from-sharkey">What to expect from Sharkey</h2>

<p>Sharkey is a soft fork of Misskey, compared to Firefish&#39;s hard fork of Misskey V12 (<del>I believe</del>). It should have most of the features you like in Firefish, plus some of their own (user data export, listenbrainz integration 👀). There are some features that will be missing (get your quote-renotes in now), but there are also some superfluous features that have been cut down (did you know there&#39;s an NSFW-detecting machine learning model in Misskey?). All in all the user experience should be similar, with the benefit of all the development and optimization Misskey has received since Firefish forked off, hopefully making things run smoother (and perhaps a bit faster).</p>

<p>As always if things go south, you can check here for updates. Otherwise, see you on the flipside (☞ﾟヮﾟ)☞</p>

<p>- E</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/the-garden-is-migrating</guid>
      <pubDate>Wed, 10 Jan 2024 20:57:32 +0000</pubDate>
    </item>
    <item>
      <title>Uh Oh!</title>
      <link>https://blog.mecha.garden/uh-oh</link>
      <description>&lt;![CDATA[You might currently be noticing that mecha.garden is down. Rest assured it is indeed down and it is indeed my fault, sort of. &#xA;&#xA;A confluence of circumstances involving SSL certificate renewal and hosting provider server migrations has rendered the postgres server kaput due to an issue currently out of my control. I don&#39;t really have an ETA right now on when it will be resolved, so check back here regularly; I&#39;ll edit this post as I find anything out :(&#xA;&#xA;EDIT: Dec 3, 2023&#xA;&#xA;We&#39;re back in action. The root cause was that I restarted the postgres box after renewing the SSL cert, but the VPS wasn&#39;t able to restart because of an issue the provider (Capsul) was/is experiencing with IP address provisioning relating to a hardware migration. The issue had to be resolved manually by an admin so it took a while before anyone could get to it. Really this was just a perfect storm that resulted in a relatively lengthy downtime, and I don&#39;t think there was really anything I could have done differently other than simply not needing to renew a cert at the same time that Capsul was experiencing networking issues.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>You might currently be noticing that mecha.garden is down. Rest assured it is indeed down and it is indeed my fault, sort of.</p>

<p>A confluence of circumstances involving SSL certificate renewal and hosting provider server migrations has rendered the postgres server kaput due to an issue currently out of my control. I don&#39;t really have an ETA right now on when it will be resolved, so check back here regularly; I&#39;ll edit this post as I find anything out :(</p>

<p>EDIT: Dec 3, 2023</p>

<p>We&#39;re back in action. The root cause was that I restarted the postgres box after renewing the SSL cert, but the VPS wasn&#39;t able to restart because of an issue the provider (<a href="https://capsul.org">Capsul</a>) was/is experiencing with IP address provisioning relating to a hardware migration. The issue had to be resolved manually by an admin so it took a while before anyone could get to it. Really this was just a perfect storm that resulted in a relatively lengthy downtime, and I don&#39;t think there was really anything I could have done differently other than simply not needing to renew a cert at the same time that Capsul was experiencing networking issues.</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/uh-oh</guid>
      <pubDate>Sat, 02 Dec 2023 04:23:59 +0000</pubDate>
    </item>
    <item>
      <title>Are we blog yet</title>
      <link>https://blog.mecha.garden/are-we-blog-yet</link>
      <description>&lt;![CDATA[Yes (I think)&#xA;&#xA;If you&#39;re reading this then that means I&#39;ve successfully configured the Garden&#39;s brand new Writefreely powered blog :) &#xA;&#xA;This will be used for status updates, general server news, moderation decisions, and whatever else. Writefreely is federated so you should be able to follow this blog at @info@blog.mecha.garden.]]&gt;</description>
      <content:encoded><![CDATA[<h4 id="yes-i-think"><em>Yes (I think)</em></h4>

<p>If you&#39;re reading this then that means I&#39;ve successfully configured the Garden&#39;s brand new <a href="https://writefreely.org/">Writefreely</a> powered blog :)</p>

<p>This will be used for status updates, general server news, moderation decisions, and whatever else. Writefreely is federated so you should be able to follow this blog at <code><a href="https://blog.mecha.garden/@/info@blog.mecha.garden" class="u-url mention">@<span>info@blog.mecha.garden</span></a></code>.</p>
]]></content:encoded>
      <guid>https://blog.mecha.garden/are-we-blog-yet</guid>
      <pubDate>Mon, 09 Oct 2023 06:39:12 +0000</pubDate>
    </item>
  </channel>
</rss>