blogging Archives - Waking up in Geelong
https://wongm.com/tag/blogging/
Marcus Wong. Gunzel. Engineering geek. History nerd.

On the radio talking trains with ABC Ballarat
https://wongm.com/2022/11/on-the-radio-again-abc-ballarat-talking-train/
Wed, 23 Nov 2022

It happened to me earlier this year and it’s just happened again – a missed call from a radio producer wondering whether I was free to chat on air the next morning on the topic of trains.

VLocity train in the platform at Ballarat station

This time around it was ABC Radio Ballarat, who had seen my recent posts on the Ballarat line through Bacchus Marsh and curve easing for faster trains, and thought it would be of interest to their listeners.

I said yes, and so I was up early the next morning jabbering on about the history of the Ballarat line.

We’ve seen a lot of changes to the Ballarat train line over the last couple of decades, but it’s only when you piece it all together that you see the sheer scale of the works that have been done.

Marcus Wong is an avid train fan and has been writing about the Ballarat line over the past few weeks on his blog Waking Up in Geelong, unearthing some answers to the strange quirks in how it was built.

You can check me out at the ABC Radio website.

Or listen to it below.

Unfortunately the recording cuts off abruptly at the end, but luckily it’s only the last 30 seconds or so.

Listening to the sound of my own voice
https://wongm.com/2022/05/on-abc-melbourne-radio-spencer-street-subways/
Thu, 26 May 2022

There is one peril to being the number one hit on Google for an obscure topic – radio producers looking for a talking head will try and chase you down to get you onto the air.

The story started on May 25, when somebody on Reddit posted a photo titled “I want to go down the forbidden ramp at Southern Cross Station. I’ve got no idea what’s down there, but I’m assuming dragons?” over at /r/Melbourne.

Now I’ve got a lot of photos online showing the old underpass beneath the station, so it wasn’t long before someone shared them to the thread.

Subway under the suburban platforms, looking east from platforms 11 and 12

Setting off the hits on my photo gallery.

My July 2021 piece on the remains of the Spencer Street Station subway also got a run, alongside my follow-up piece Building the Spencer Street Station subway – a history.


Victorian Railways annual report 1961-62

Then the next morning something different – messages via various channels from a producer at ABC Radio Melbourne.

Hey Marcus

Is there a number I can call you on?

Love to chat to you about the Southern cross tunnels…

Anyway, I gave them a ring, and later that day I was on the radio blabbering on about the tunnels at Southern Cross Station.

Which was then followed by a handful of text messages and emails from friends and family who listen to ABC Radio and heard me on air. 😂

You can listen to it at the ABC Radio website, or below.

Footnote

I even managed to slip a bonus piece into the interview – why the Western Ring Road takes a kink around Ardeer.

Getting ‘hugged to death’ by Hacker News
https://wongm.com/2022/04/hacker-news-link-crashed-my-website/
Tue, 05 Apr 2022

The story starts when I published a piece on the backyard approach lighting at Adelaide Airport to my blog.

Later that day I noticed that my website was now running rather sluggishly, so checked the logs – an explosion in traffic.

And the reason – someone over at Hacker News had shared a link to it, and it was getting heaps of traffic.

I’m an occasional visitor to the site, which is a social news website like Reddit, but with a focus on computer science and entrepreneurship – so I was kinda surprised to see it getting a run over there.

Of course, given the tech background of the readers, discussion soon went off onto the ‘hug of death‘ that all of the traffic was giving my poor web server.

As well as jokes about the poor state of Australia’s internet.

And fixing it?

I run my websites on a virtual private server (VPS) that I manage myself, so unfortunately for me I was on my own to manage the flood of traffic.

My initial solution was the simplest, but also costly – just scale up my server to one with twice the CPU cores and twice the RAM.

That made my site more responsive, but I didn’t want to double my monthly web hosting costs, so it was time to get smart. This description of an overloaded web server sounded exactly like mine:

If your VPS gets overloaded and reaches the maximum number of clients it can serve at once, it will serve those clients, and other users will simply get a quick failure. They can then reload the page and maybe have greater success on the second try.

This sounds bad, but believe me, it’s much better to have these connections close quickly but leave the server in a healthy state, rather than hanging open for an eternity. Surprisingly, you can get better performance from a server that has fewer child processes but responds quickly, than from a server with so many child processes that it cannot handle them all.

I had to dig into the settings of Apache to optimise them for the resources my server had available.

Most operating systems’ default Apache configurations are not well suited for smaller servers – 25 child processes or more is common. If each of your Apache child processes uses 120MB of RAM, then your VPS would need 3GB just for Apache.

One visitor’s web browser may request 4 items from the website at once, so with only 7 or 8 people trying to load a page at the same time your cloud server can become overloaded. This causes the web page to hang in a constantly loading state for what seems like an eternity.

It is often the case that the server will keep these dead Apache processes active, attempting to serve content long after the user gave up, which reduces the number of processes available to serve users and reduces the amount of system RAM available. This causes what is commonly known as a downward spiral that ends in a bad experience for both you and your site’s visitors.

What you should do is figure out how much RAM your application needs, and then figure out how much is left, and allocate most of that to Apache.

I used the handy apache2buddy tool to analyse the RAM usage on my server, and calculate the maximum number of processes Apache should be allowed to spin up.
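
For the curious, apache2buddy is a single Perl script typically run straight from the web as root – the GitHub URL below is where the project lived at the time of writing, so treat the exact path as an assumption:

curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl

The arithmetic it performs boils down to something like this sketch – the reserved and per-child figures here are assumptions for illustration, not my server’s actual numbers:

TOTAL_MB=$(free -m | awk '/^Mem:/{print $2}')   # total RAM on the box
RESERVED_MB=512                                 # assumed: what MySQL, PHP and the OS need
PER_CHILD_MB=120                                # assumed: average size of one Apache child
echo "MaxRequestWorkers ~ $(( (TOTAL_MB - RESERVED_MB) / PER_CHILD_MB ))"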

And since making these changes, the uptime of my websites has skyrocketed.

The status page found above is powered by the “Cloudflare Worker – Status Page” tool created by Adam Janiš.

Footnote: the ‘Slashdot effect’

Having your website taken down when a popular site links to you has been a thing for years – it’s called the ‘Slashdot effect‘ after one of the early social news websites of the 2000s – Slashdot.

Moving containerised logs by train
https://wongm.com/2019/10/moving-containerised-logs-by-train/
Mon, 28 Oct 2019

Since the first railways were built in Australia, timber has been moved by train – a traffic that still exists today, but somewhat hidden thanks to the rise of containers.

Loader ready to push another pile of logs into a container

In the old days

Timber tramways would transport freshly cut logs to sawmills deep in the forest.


Museums Victoria image MM 5821

With the sawn timber then being loaded onto main line trains for transport to the construction industry.


Photo by Weston Langford

But by World War II the timber tramways were gone: replaced by road transport, and supplying a network of town based sawmills established following the 1939 Black Friday bushfires.

Enter woodchips

In the 1960s the timber industry switched from sawlogs to clearfell logging for export woodchips.

Initially this traffic was by road, but in 1999 newly privatised freight operator Freight Australia started to move this traffic to rail.

Awaiting departure from the log yard at Bairnsdale

Logs were loaded onto flat wagons at Bairnsdale and Wodonga.

Loaded log flats at Bairnsdale

Hauled by train to Geelong.

X43 powers over the hills towards Hillside

Where they were unloaded at the Midway woodchip plant and sent through the chipper.

H3 heads back out of the Midway Siding

To be loaded by conveyor into ships.

New Woodchip loader at Corio Quay North

Bound for Japanese paper mills.

Sun goes down over Corio Bay

These log trains continued running until 2009, when the Black Saturday bushfires burnt out the source of the logs.

And now containers

In the 2000s another way of moving timber by rail emerged – sawlogs stuffed inside standard ISO containers, ready to be exported by sea.

Logs are delivered to the rail terminal by truck and then stockpiled.

Loader goes back for another claw full of logs

The loader grabs a claw full of logs, and loads them into a cradle.

Loader delivers another claw full of logs into the cradle

Once the cradle is full, it is placed in front of an empty 40 foot container.

Front end loader pushes logs into a 40 foot ISO container at Bordertown

A specially fitted loader then pushes the logs into the container.

Specially fitted loader pushes the logs into a 40 footer container

Then the forklift takes the empty log cradle away for reloading.

Forklift moves the empty log cradle for reloading

And so the process repeats.

Specially fitted loader pushes the logs into a 40 footer container

Current containerised log rail services include:

Footnote: log wagons

Moving logs by container on standard flat wagons is a lot more flexible than constructing dedicated log wagons, which Freight Australia discovered when trying to expand their fleet.

Freight Australia found an emerging market in transporting plantation logs, and to capitalise on it, the Sale to Bairnsdale line had to be reopened. For the log business to expand, wagons that could carry logs were in urgent demand.

In order to satisfy this demand, older disused container flats, open wagons and louvre vans were all identified as having the potential to be converted into log wagons by having their sides and canopies removed. Further checks were undertaken to determine the suitability of each wagon to support the load where the stanchion cradles were to be attached.

The container flats and the open wagons proved straightforward; however, the VLEX louvre van design had to be analysed as its canopy needed to be fully removed. The analysis revealed, as with many of the original PTC wagons, that the centre sill on the vans actually took 95+% of the load.

The cradle frames for the logs were then designed so that they could be huck bolted onto the various wagon classes. This allowed for easy replacement if damaged, and the ability to convert these wagons back to container flats if the need arose.

Flexibility and the ability to readily convert wagons for any commodity was a bonus for these wagons. However, unlike previous log wagons built, these wagons were not fitted with bulkheads. To fit bulkheads to these wagons would mean excessive extra costs, shorten their effective carrying load length, and reduce their flexibility. To enable these wagons to be accredited for operation in Victoria, Freight Australia had to demonstrate the safe securing of the logs, namely in the longitudinal direction, without bulkheads being fitted.

The ROA Manual of Engineering Standards and Practices requires an overall minimum load sustaining capacity in the longitudinal direction equal to the gravity force of the load multiplied by 4 – i.e. survive a 4 g deceleration (a rough worked example of that force follows the list below). Freight Australia demonstrated the safety of these wagons, loaded with logs, and why bulkheads were not required by:

• Carrying out impact trials of log wagons to determine the load movements at speeds between 8 km/h and 15 km/h. These dramatic tests could not replicate 4 g, as to do so would have meant destroying a wagon; however, they did demonstrate the controlled way the load shifted. The mass of the logs tied down and jammed between the stanchions, and the friction between the logs, meant the log movements were contained within the outline diagram and none of the logs broke away from their total mass;

• Modelling the forces and the loads required to move the loaded logs between the stanchions and calculating the sufficiency of the log restraining systems;

• Reviewing other log transporters and their practices. This included the American railroads, who are governed by the AAR guidelines, and more importantly our competitors in the road industry, who follow the Department of Transport guidelines. Neither operation is required to run their vehicles with bulkheads.
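
To put that 4 g requirement into perspective – using assumed figures, not numbers from the trials – a container holding 25 tonnes of logs would need restraint able to resist roughly 4 × 25,000 kg × 9.8 m/s² ≈ 980 kN in the longitudinal direction: the equivalent of about 100 tonnes-force trying to slide the load along the wagon.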

Further reading

How many blog posts do I write in a year?
https://wongm.com/2015/07/many-blog-posts-in-a-year/
Thu, 16 Jul 2015

I’ve just sat down and run the numbers – if I continue at my current blog posting rate, after one year I will have published a total of 142 new entries!

Pile of unopened mX newspapers after the evening peak is over

My current posting schedule is as follows:

  • Two posts a week here (104 posts/year)
  • Once a fortnight about European railways at www.eurogunzel.com (26 posts/year)
  • […]

As to how I manage to churn out so many blog posts, I don’t actually sit down at the same time every weekend and type out the posts for the next week. Instead, my workflow is as follows:

Step 1:

Add an entry to my ever increasing list of prospective blog post topics. Normally they are just links to newspaper articles, interesting reports in PDF format, or a collection of photos I’m intending to write more about.

Step 2:

Dig through my list of draft entries until I find something that grabs my interest, then start writing and researching further.

Step 3:

Hit a roadblock and procrastinate. Writer’s block, a dead end on the research front, or a lack of photos are common causes.

Step 4 (optional):

Realise I have bitten off more than I can chew for one blog post, and spin off part of it into a future post. A variant of this is when I find other interesting bits and pieces while researching one subject, resulting in a new entry being added to my list of prospective topics.

Step 5:

Decide the post is finished, and put it into my pending articles pile.

Step 6:

Dig through my pending articles pile, and add them to my list of scheduled posts.

Step 7:

You eventually see the article online.

Footnote

So how long does my workflow take?

My recent ‘Fairness in PTV fare evasion penalties?‘ post started as a draft back in December 2014, and required three separate editing sessions to polish up.

My ‘Where does Geelong’s sewage go?‘ post was a much bigger job, being almost two years in the making – I started it way back in August 2013, spent some time on it in December 2014, then polished it off in July 2015.

Fixing my blog robot
https://wongm.com/2015/05/wordpress-scheduled-post-issues/
Sun, 24 May 2015

One thing you might not know about this site is that I don’t actually wake up each morning and type up a new blog post – I actually write them ahead of time, and set them up to be published at a future time. Unfortunately this doesn’t always work, such as what happened to me a few weeks ago.

XPT derailed outside Southern Cross - July 11, 2014

I use WordPress to host my various blog sites, and it has a feature called “scheduled posts” – set the time you want the post to go online, and in theory they’ll magically appear in the future, without any manual intervention.

For this magic to happen, WordPress has to regularly check what time it is, check if any posts are due to be published, and if so, publish them – a process that is triggered in two different ways:

  • run the check every time someone visits the site, or
  • run the check based on a cron job (scheduled task)

The first option is unreliable because it delays page load times, and you can’t count on people visiting a low traffic web site, so the second option is what I put in place when setting up my server.

I first encountered troubles with my scheduled posts in early April.

My initial theory was that a recently installed WordPress plugin was to blame, running at the same time as the scheduled post logic and slowing it down.

I removed the plugin, and scheduled posts on this site started to work again – I thought it was all fixed.

However, a few weeks later I discovered that new entries for my Hong Kong blog were missing in action.

I took a look at the config for my cron job, and it seemed to be correct.

# every two minutes, ask WordPress to publish any scheduled posts that are due
*/2 * * * * curl http://example.com/wp-cron.php > /dev/null 2>&1

I hit the URL featured in the command, and it triggered the publication of a new blog post – so everything was good on that front!

I then dug a bit deeper, and ran the curl command directly on my server.

user@server:~$ curl http://example.com/wp-cron.php
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved 
<a href="http://www.example.com/wp-cron.php">here</a>.
</p>
<hr>
<address>Apache Server at example.com Port 80</address>
</body></html>

Bingo – I had found my problem!

Turns out I had previously added a non-www to www redirect for the website in question via a new .htaccess rule – and by default curl doesn’t follow HTTP redirects.

The end result was my cron job hitting a URL, finding a redirect but not following it, resulting in the PHP code never being executed, and my future dated blog posts lying in limbo.

My fix was simple – update my cron job to hit the www version of the URL – and since then, my future dated blog posts have all appeared on the days they were supposed to.
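
For anyone wanting to guard against the same trap, either of these cron lines would have avoided it – with example.com standing in for the real domain, as above:

# option 1: hit the canonical www URL directly
*/2 * * * * curl http://www.example.com/wp-cron.php > /dev/null 2>&1

# option 2: keep the old URL, but use -L so curl follows any redirects
*/2 * * * * curl -L http://example.com/wp-cron.php > /dev/null 2>&1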

About the lead photo

The train in the lead photo is the Melbourne-Sydney XPT – on 11 July 2014 it derailed near North Melbourne Station due to a brand new but poorly designed turnout.

Tracing a performance issue on my web server
https://wongm.com/2015/01/tracing-performance-issue-apache-mysql-php-web-server/
Mon, 05 Jan 2015

Managing my various web sites can be difficult at times, and my experience the other weekend was no different. My day started normally enough, as I logged onto my VPS and installed the latest security patches, then set to work on uploading new photos to my site. It was then I noticed my web site was taking minutes to load pages, not seconds, so I started to dig into the cause.

Server statistics logged by New Relic

My initial setup

After I moved from shared web hosting, my collection of websites had been running on a $5 / month VPS from Digital Ocean – for that I got 1 CPU, 512 MB of RAM, and 20 GB of disk space. On top of that I used an out-of-the-box Ubuntu image, and installed Apache for the web server and MySQL for the database server.

I then installed a number of separate WordPress instances for my blogs, a few copies of Zenphoto to drive my different photo galleries, and then a mishmash of custom code for a number of other side projects. All of that is exposed via four different domain names, all of which sit behind the CloudFlare CDN to reduce the load on my server.

With so many web sites running on just 512 MB of RAM, performance was an issue! My first fix was to set up a 1 GB swap file to give some breathing room, which did stabilise the situation, but MySQL would still crash every few days when the server ran out of memory.
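
If you’ve never done it, setting up a swap file on Ubuntu only takes a few commands – a minimal sketch, with the 1 GB size and /swapfile path as assumptions:

sudo fallocate -l 1G /swapfile                              # reserve 1 GB of disk space
sudo chmod 600 /swapfile                                    # root-only access
sudo mkswap /swapfile                                       # format it as swap
sudo swapon /swapfile                                       # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab  # make it survive a reboot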

Swapping out Apache for the much less memory intensive Nginx web server is one way to fix the issue, but I didn’t have time for that. My solution – cron jobs to check the status of my server and restart the services as required!

The first script I came up with checked if the MySQL service was running, and start it up if it wasn’t.

# if MySQL isn't reporting itself as running, email me its status and start it back up
service mysql status | grep 'mysql start/running' > /dev/null 2>&1
if [ $? != 0 ]
then
    SUBJECT="MySQL service restarted $(date)"
    service mysql status | mail -s "$SUBJECT" me@example.com
    sudo service mysql start
fi

My second attempt negated the need for the first script – it checked how much free swap space was left on my server (a proxy for overall memory pressure), and restarted Apache if it dropped below a given threshold.

# Minimum available memory limit, MB
THRESHOLD=300

# read the free swap column – swap headroom stands in for overall memory pressure
available=$(free -m | awk '/^Swap:/{print $4}')
if [ $available -lt $THRESHOLD ]
then
    SUBJECT="Apache service restarted $(date)"
    service apache2 status | mail -s "$SUBJECT" me@example.com
    sudo service apache2 restart
fi
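
Both scripts then just needed to be scheduled – something like the following entries in root’s crontab, with the paths being assumptions for illustration:

# check MySQL every two minutes, and free memory every five
*/2 * * * * /root/scripts/check-mysql.sh
*/5 * * * * /root/scripts/check-memory.sh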

Under normal load my cron job would restart Apache every day or so, but it did keep the database server up for the rest of the time.

Something is not right

After realising my web site was taking minutes to load pages, not seconds, I started to dig into my server logs. CPU load was hitting 100%, as was memory consumption, and my cron job was restarting Apache every few minutes – something wasn’t quite right!

My first avenue of investigation was Google Analytics – I wanted to find out if the spike in load was due to a flood of new traffic. While the Slashdot effect is a nice problem to have, in my case it wasn’t to be – incoming traffic was normal.

I then took a look at my Apache access logs – they are split up by virtual host, so I had a number of log files to check out. The first suspicious entries I found were brute force attacks on my WordPress login pages – blocking those was simple, but the server load was still high.
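
A quick way to spot that sort of abuse is to rank the client IP addresses in an access log – a one-liner along these lines, with the log path an assumption:

# the first field of a combined-format log line is the client IP – print the top ten
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head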

Spending my way out

When looking to upgrade a system to handle more traffic, there are two completely different ways to go about it:

  • Be smart and optimise what you already have, to do more with the same resources
  • Throw more resources at the problem, and just ignore the cause

My server was already nearing the 20 GB disk space limitation set by Digital Ocean on their $5 / month VPS, so I figured an upgrade to the next size VPS might fix my problem. Upgrading a Digital Ocean ‘droplet’ is a simple job with their ‘Fast-Resize’ functionality – it takes about a minute – but in my case the option wasn’t available, so I had to do it the hard way:

  1. shut down my server,
  2. create a snapshot of the stopped virtual machine,
  3. spin up a new Digital Ocean server,
  4. restore my snapshot to the new server,
  5. point CloudFlare from my old server IP address to the new one.

All up it took around 30 minutes to migrate from my old server to my new one, but at least with CloudFlare being my public facing DNS host, I didn’t have to wait hours for my new IP address to propagate across the internet!

Unfortunately, the extra resources didn’t fix my problem – CPU load was still through the roof.

Digging for the root cause

I first installed the htop process viewer on my server, and was able to see that MySQL was using far more CPU than normal – presumably my caching wasn’t working right, and my web pages were having to be generated with fresh database queries each time.
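
If you want to peek at the same thing on your own server, the list of in-flight queries is a single command away (credentials assumed):

mysql -u root -p -e 'SHOW FULL PROCESSLIST;'
# FULL stops MySQL truncating each query to its first 100 characters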

Next I fired up a MySQL console, and had a look at the currently running queries. Here I noticed a curious looking query over and over again:

SELECT @serachfield ...

A check of the code deployed to my server indicated that the query was thanks to the search function in Zenphoto, and when I went back into my Apache access logs, I eventually found the problem – a flood of hits on my photo gallery.

Apache web server access logs

Each line in the logs looked like the following:

108.162.250.234 – – [21/Dec/2014:04:32:03 -0500] “GET /page/search/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/beacon-3.newrelic.com HTTP/1.1” 404 2825 “https://railgallery.wongm.com/page/search/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648)”

Each request was bound for “http://js-agent.newrelic.com/nr-476.min.js” or other files hosted at newrelic.com, and the user agent always appeared to be Internet Explorer 8.

New Relic is a software analytics tool I have installed on my server, and on seeing the multiple references to it in my access logs, I remembered that I had updated my version of the New Relic agent just before my performance issues had started. Had I found a bug in it?

The cause

A check of the HTML source of the page in question showed a link to js-agent.newrelic.com embedded in the page, so I came up with the following explanation for the load on my server:

  1. A user hits https://railgallery.wongm.com/page/search/SEARCH_TERM
  2. The New Relic Javascript file at http://js-agent.newrelic.com/nr-476.min.js somehow gets loaded as a relative path, and not an absolute one, which results in a request to:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/nr-476.min.js
  3. My server would then treat the above URL as valid, delivering a page, which then includes a relative link to js-agent.newrelic.com/nr-476.min.js a second time, which then results in a page request to this URL:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js
  4. And so on recursively:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js

With the loop of recursive page calls for a new set of search results, each requiring a fresh database query, it was no wonder my database server was being hit so hard.

As an interim fix, I modified the Zenphoto code to ignore search terms that referenced New Relic, and then rolled back to the older version of the New Relic agent.

# remove the problematic agent packages...
sudo apt-get remove newrelic-php5
sudo apt-get remove newrelic-php5-common
sudo apt-get remove newrelic-daemon
sudo apt-get autoremove newrelic-php5
# ...then reinstall the known-good pinned versions
sudo apt-get install newrelic-php5-common=4.15.0.74
sudo apt-get install newrelic-daemon=4.15.0.74
sudo apt-get install newrelic-php5=4.15.0.74

I then raised a support case for New Relic to look into my issue. In an attempt to reproduce the issue, I rolled forward with the current version of the New Relic agent to play ‘spot the difference’, but I couldn’t find any, and the errors also stayed away.

I’m writing this one off as a weird conflict between the updated New Relic agent running on my server, and an old version of the browser monitor javascript file cached by a single remote user.

Conclusion

After working through my performance issues I now know more about what my web server is doing, and the extra RAM available following the upgrade means my horrible cron job hacks are no longer required to keep the lights on!

As for the steps I will follow next time, here are the places to check:

  • Google Analytics to check if I am getting a flood of legitimate traffic,
  • Apache access logs for any odd looking sources of traffic,
  • current process list to see where the CPU usage is coming from,
  • currently running MySQL queries for any reoccurring patterns.

My most viewed blog posts for 2014
https://wongm.com/2014/12/my-most-viewed-blog-posts-2014/
Mon, 29 Dec 2014

I sat down the other evening and had a look at my top 20 most viewed blog posts for 2014 – entries with an asterisk (*) beside them were published this year.

My top post was one I published early in the year, digging into the story behind a viral image of a fire hose crossing railway tracks – the post seems to be getting a lot of traffic even today. Next up was my 15 minutes of fame when my story about confronting a racist guy on the tram ended up in the news.

Heading further down the list we find a number of railway themed entries, some older posts about abandoned buildings in Melbourne, and three old faithfuls – fixing the power jack of a Samsung laptop, the history of National Mutual, and a how-to for fixing digital camera timestamps after daylight savings time changes.

With only eight of my top 20 posts having been written this year, it goes to show the value that writing a “timeless” blog post can give.

Rebuilding all of my websites
https://wongm.com/2014/07/rebuilding-websites/
Wed, 09 Jul 2014

I’ve been quite busy recently – on Thursday last week I discovered all of my web sites were offline, which resulted in me moving to a new hosting provider, and rebuilding every bit of content. So how did I do it?

Going offline

I first realised something was wrong when I discovered all of my web sites displaying the following ominous error message:

 'Website Suspended' message from cPanel

I checked my email, and I couldn’t find any notification from my hosting provider that my account was suspended – a pretty shit job from them!

However, I wasn’t exactly surprised, as over the past few years I’ve been receiving these automated emails from their system:

Your hosting account with username: [XYZ] has over the last few days averaged CPU usage that is in excess of your account allocation.

This could be caused by a number of factors, but is most likely to be due to a misconfigured installation of a 3rd party script, or by having too many features, modules or plugins enabled on your web site.

If you simply have a very busy or popular web site, you may need to upgrade your account which will give you a higher CPU allocation. Please contact our support team if you need help with this.

Until your usage average drops back below your CPU quota, your account will be throttled by our CPU monitoring software. If your account continues to use more CPU than what it is entitled to, you risk having your account suspended.

All up I was running about a dozen different web sites from my single shared web hosting account, and over the years I’ve had to increase the amount of resources available to my account to deal with the increasing load.

Eventually I ended up on a ‘5 site’ package from my hosting provider, which they were charging me almost $300 a year for – a steep price, but I was too lazy to move everything to a new web host, so I just kept on paying it.

Having all of my sites go offline was enough of a push for me to move somewhere new!

What needed to be moved

All up my online presence consisted of a dozen different sites spread across a handful of domain names, running a mix of open source code and code I had written myself. With my original web host inaccessible, I had to rebuild everything from backups.

You do have backups, don’t you?

Welcome to the western suburbs

The rebuild

I had been intending to move my web sites to a virtual private server (VPS) for a while, and having to rebuild everything from scratch was the perfect excuse to do so.

I ended up deciding to go with Digital Ocean – they offer low-ish prices, servers in a number of different locations around the world, fast provisioning of new accounts, and an easy migration path to a faster server if you ever need it.

After signing up to their bottom end VPS (512 MB RAM and a single core) I was able to get cracking on the rebuild – they emailed me the root password a minute later and I was in!

As I had a bare server with nothing installed, a lot of housekeeping needed to be done before I could start restoring my sites:

  • Swapping over the DNS records for my domains to my new host,
  • Locking down access to the server,
  • Setting up a swap file,
  • Installing Apache, MySQL and PHP on the server,
  • Creating virtual directories on the server for each separate web site,
  • Creating user accounts and empty databases in MySQL

I’ve only ever played around with Linux a little, but after 30 minutes I had an empty page appearing for each of my domain names.

To get my content back online, thankfully I had the following backups available to me:

  • I run three blogs on the open source WordPress software, so I could just install that from scratch to get a bare website back
  • My main photo gallery runs on the open source ZenPhoto software, so that was another internet download
  • Each blog and photo gallery uses a custom theme, of which I had backups on my local machine to re-upload
  • I keep a mirror of my WordPress uploads on my local machine, so I just had to reupload those to make the images work again
  • When I upload new photos to my gallery, I keep a copy of the web resolution version on my local machine, which I was able to reupload
  • Every night I have a cron job automatically emailing me a backup copy of my WordPress and ZenPhoto databases, so my blog posts and photo captions were safe (a sketch of such a job follows this list)
  • Some of my custom web code is available on GitHub, so a simple git pull got those sites back online
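
The nightly database backup mentioned above is nothing fancy – a sketch of such a cron job, where every username, password and address is a placeholder:

# 2am daily: dump the database, compress it, and email it offsite
0 2 * * * mysqldump -u backupuser -pSECRET wordpress | gzip | uuencode wordpress.sql.gz | mail -s "WordPress DB backup" me@example.com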

Unfortunately I ran into a few issues when restoring my backups (doesn’t everyone…):

  • My WordPress backup was from the day before, and somebody had posted a new comment that day, so it was lost
  • I had last mirrored my WordPress uploads about a week before the crash, so I was missing a handful of images
  • The last few months of database backups for Rail Geelong were only 1kb in size – it appears the MySQL backup job on my old web host was defective
  • Of the 32,000 photos I once had online, around 2,000 files were missing from the mirror I maintained on my local machine, and the rest of them were in a folder hierarchy that didn’t match that of the database

I wasn’t able to recover the lost comment, but I was able to chase up the missing WordPress uploads from other sources, and thankfully in the case of Rail Geelong my lack of regular updates meant that I only lost a few typographical corrections.

As for the 2,000 missing web resolution images, I still had the original high resolution images available on my computer, so my solution was incredibly convoluted:

  • Move all of the images from the mirror into a single folder
  • Use SQL to generate a batch file to create the required folder structure (a sketch of this trick follows the list)
  • Use more SQL to generate a second batch file, this time to move images into the correct place in the folder structure
  • Run a diff between the images that exist, and those that do not
  • Track down the 2,000 missing images in my collection of high resolution images, and create a web resolution version in the required location
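
The SQL-to-batch-file trick looked something like this – the table and column names below are illustrative assumptions, not necessarily ZenPhoto’s actual schema:

# generate and run a script that recreates the album folder hierarchy
mysql -N -u user -p zenphoto -e "SELECT CONCAT('mkdir -p \"', folder, '\"') FROM albums;" > make-folders.sh
bash make-folders.sh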

Three hours after I started, I had my first win.

Unfortunately I found a number of niggling issues throughout the night.

By 2am I was seven hours in, and had managed to get another domain back online.

Eventually I called it quits at 4am, as I waited for my lethargic ADSL connection to push an elephant up a drinking straw.

I spent the weekend out and about so didn’t get much time to work on rebuilding my content – it wasn’t until the fourth day after my sites went down that I started to track down the 2,000 missing images from my photo gallery.

Thankfully I got a lucky break – on Monday afternoon I somehow regained access to my old web host, so I was able to download all of my missing images, as well as export an up-to-date version of the Rail Geelong database.

After a lot more stuffing around with file permissions and monitoring of memory usage, by Tuesday night it seemed that I had finally rebuilt everything, and it was running somewhat reliably!

What’s next

Plenty of people online seem to rave about replacing the Apache web server and standard PHP stack with Nginx and PHP-FPM to increase performance – it’s something I’ll have to try out when I get the time. However for the moment, at least I am back online!

Another 15 minutes of fame
https://wongm.com/2014/03/another-15-minutes-fame/
Thu, 20 Mar 2014

It looks like my blog post yesterday about confronting a racist guy on the tram got a bit of attention, with almost 8,000 views of my blog during the day.

Traffic to my blog on March 20, 2014

It also got a run in The Age – ‘Yarra Trams investigates alleged racial abuse’ – which also got a number of views (at least until Malaysian Airlines flight MH370 took over the front page again).

Most read articles on The Age - afternoon of March 20, 2014

Media interest

My first mention in the media was in the Melbourne Express section of The Age at 7:34am. Journalist Angus Holland compiles the section, and he follows me on Twitter, which probably explains the fast response.

By 8am the flood of messages on Twitter started – first contact was another journalist from The Age, followed by Nine News dropping me a line at 9am. Soon after that I got a message from somebody else from Nine News, as well as a third journalist at The Age (turns out story leads at a newspaper get passed around during the course of the day, depending on which journalist is available).

Radio stations got into the act later on: around 11:30am somebody from 3AW wanted to get in touch, with Austereo (home of Fox and Triple M) dropping me a message at noon, and 774 ABC Melbourne a few hours later.

While Channel Nine was the first television station to contact me, it took a bit longer for the other two commercial stations to track me down: 7 News Melbourne didn’t message me until almost 2pm, while Channel 10 took a different tack – they got in touch via a little used email address and contact details tied to my domain name registration.

I wasn’t that keen to take up the radio and television interview requests, so I asked my friends on Facebook for a second opinion. One of them summed up my concerns far better than I could have ever written:

You will have no control over what the message is, how the message is portrayed, and where it goes from there. The question is; what are you hoping to gain?

In the end, I only had a quick chat with Mex Cooper from The Age, and declined all of the television and radio interview requests.

Reactions

Following people’s reactions around the place, they fell into four groups:

  • Well done, good on you for saying something.
  • Public shaming of racist idiots makes my day.
  • I wouldn’t get involved, who knows what they will do you and anyone else nearby.
  • Defending the guy in question, stating that he is entitled to his opinion and I should butt out.

In the case of the latter, there is a difference between whispering something ‘politically incorrect’ to your travel companion, and muttering it in a passive-aggressive way to make the people around you feel uncomfortable and threatened.

People are entitled to believe whatever they feel like, but if you’re out in public sometimes you just need to hold your tongue.

Further reading

In the mood to wade through pages of comments from Reddit users? Have fun.
