PHP Archives - Waking up in Geelong
https://wongm.com/tag/php/
Marcus Wong. Gunzel. Engineering geek. History nerd.

Fixing my blog robot
https://wongm.com/2015/05/wordpress-scheduled-post-issues/
Sun, 24 May 2015

One thing you might not know about this site is that I don’t actually wake up each morning and type up a new blog post – I actually write them ahead of time, and set them up to be published at a future time. Unfortunately this doesn’t always work, such as what happened to me a few weeks ago.

XPT derailed outside Southern Cross - July 11, 2014

I use WordPress to host my various blog sites, and it has a feature called “scheduled posts” – set the time you want a post to go online, and in theory it’ll magically appear in the future, without any manual intervention.

For this magic to happen, WordPress has to regularly check what time it is, check if any posts are due to be published, and if so, publish them – a process that is triggered in two different ways:

  • run the check every time someone visits the site, or
  • run the check based on a cron job (scheduled task)

The first option is unreliable because it delays page load times, and you can’t count on people visiting a low traffic web site, so the second option is what I put in place when setting up my server.
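
As an aside, when you rely on an external cron job like this, WordPress can also be told to skip its visit-triggered check entirely with a single constant in wp-config.php. This is a detail not covered in the post itself, just a common companion step:

// in wp-config.php: stop WordPress running its scheduler on page visits,
// leaving the external cron job as the only trigger
define( 'DISABLE_WP_CRON', true );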

I first encountered troubles with my scheduled posts in early April.

My initial theory was that a recently installed WordPress plugin was to blame, running at the same time as the scheduled post logic and slowing it down.

I removed the plugin, and scheduled posts on this site started to work again – I thought it was all fixed.

However, a few weeks later I discovered that new entries for my Hong Kong blog were missing in action.

I took a look at the config for my cron job, and it seemed to be correct.

*/2 * * * * curl http://example.com/wp-cron.php > /dev/null 2>&1

I hit the URL featured in the command, and it triggered the publication of a new blog post – so everything looked good on that front!

I then dug a bit deeper, and ran the curl command directly on my server.

user@server:~$ curl http://example.com/wp-cron.php
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved 
<a href="http://www.example.com/wp-cron.php">here</a>.
</p>
<hr>
<address>Apache Server at example.com Port 80</address>
</body></html>

Bingo – I had found my problem!

Turns out I had previously added a non-www to www redirect for the website in question via a new .htaccess rule – and by default curl doesn’t follow HTTP redirects.

The end result was my cron job hitting a URL, finding a redirect but not following it, meaning the PHP code was never executed, and my future-dated blog posts were left lying in limbo.

My fix was simple – update my cron job to hit the www. version of the URL, and since then, my future-dated blog posts have all appeared on the days they were supposed to.

About the lead photo

The train in the lead photo is the Melbourne-Sydney XPT – on 11 July 2014 it derailed near North Melbourne Station due to a brand new but poorly designed turnout.

Tracing a performance issue on my web server
https://wongm.com/2015/01/tracing-performance-issue-apache-mysql-php-web-server/
Mon, 05 Jan 2015

Managing my various web sites can be difficult at times, and my experience the other weekend was no different. My day started normally enough, as I logged onto my VPS and installed the latest security patches, then set to work on uploading new photos to my site. It was then I noticed my web site was taking minutes to load pages, not seconds, so I started to dig into the cause.

Server statistics logged by New Relic

My initial setup

After I moved from shared web hosting, my collection of websites had been running on a $5 / month VPS from Digital Ocean – for that I got 1 CPU, 512 MB of RAM, and 20 GB of disk space. On top of that I used an out-of-the-box Ubuntu image, and installed Apache for the web server and MySQL for the database server.

I then installed a number of separate WordPress instances for my blogs, a few copies of Zenphoto to drive my different photo galleries, and then a mishmash of custom code for a number of other side projects. All of that is exposed via four different domain names, all of which sit behind the CloudFlare CDN to reduce the load on my server.

With so many web sites running on just 512 MB of RAM, performance was an issue! My first fix was to set up a 1 GB swap file to give some breathing room, which did stabilise the situation, but MySQL would still crash every few days when the server ran out of memory.

Swapping out Apache for the much less memory intensive Nginx web server is one way to fix the issue, but I didn’t have time for that. My solution – cron jobs to check the status of my server and restart the services as required!

The first script I came up with checked if the MySQL service was running, and started it up if it wasn’t.

#!/bin/bash
# check whether the MySQL service is running, and bring it back up if not
service mysql status | grep 'mysql start/running' > /dev/null 2>&1
if [ $? != 0 ]
then
    # email me the current status, then start MySQL back up
    SUBJECT="MySQL service restarted $(date)"
    service mysql status | mail -s "$SUBJECT" me@example.com
    sudo service mysql start
fi

My second attempt negated the need for the first script, as it checked how much free swap space was left on my server (by the time that runs low, memory has already run out), and restarted Apache if it fell below a given threshold.

#!/bin/bash
# restart Apache once free swap space drops below this threshold (in MB) -
# by the time the server is this far into swap, memory has effectively run out
THRESHOLD=300

available=$(free -m | awk '/^Swap:/{print $4}')
if [ $available -lt $THRESHOLD ]
then
    # email me the current Apache status, then restart it to claw back memory
    SUBJECT="Apache service restarted $(date)"
    service apache2 status | mail -s "$SUBJECT" me@example.com
    sudo service apache2 restart
fi

Under normal load my cron job would restart Apache every day or so, but it did keep the database server up for the rest of the time.

Something is not right

After realising my web site was taking minutes to load pages, not seconds, I started to dig into my server logs. CPU load was hitting 100%, as was memory consumption, and my cron job was restarting Apache every few minutes – something wasn’t quite right!

My first avenue of investigation was Google Analytics – I wanted to find out if the spike in load was due to a flood of new traffic. The Slashdot effect is a nice problem to have, but in my case it wasn’t to be – incoming traffic was normal.

I then took a look at my Apache access logs – they are split up by virtual host, so I had a number of log files to check out. The first suspicious entries I found were brute force attacks on my WordPress login pages – blocking those was simple, but the server load was still high.

Spending my way out

When looking to upgrade a system to handle more traffic, there are two completely different ways to go about it:

  • Be smart and optimise what you already have, to do more with the same resources
  • Throw more resources at the problem, and just ignore the cause

My server was already nearing the 20 GB disk space limitation set by Digital Ocean on their $5 / month VPS, so I figured an upgrade to the next size VPS might fix my problem. Upgrading a Digital Ocean ‘droplet’ is a simple job with their ‘Fast-Resize’ functionality – it takes about a minute – but in my case the option wasn’t available, so I had to do it the hard way:

  1. shut down my server,
  2. create a snapshot of the stopped virtual machine,
  3. spin up a new Digital Ocean server,
  4. restore my snapshot to the new server,
  5. point CloudFlare from my old server IP address to the new one.

All up it took around 30 minutes to migrate from my old server to my new one, but at least with CloudFlare being my public facing DNS host, I didn’t have to wait hours for my new IP address to propagate across the internet!

Unfortunately, the extra resources didn’t fix my problem – CPU load was still through the roof.

Digging for the root cause

I first installed the htop process viewer on my server, and was able to see that MySQL was using far more CPU than normal – presumably my caching wasn’t working right, and my web pages were having to be generated with fresh database queries each time.

Next I fired up a MySQL console, and had a look at the currently running queries. Here I noticed a curious-looking query running over and over again:

SELECT @serachfield ...

A check of the code deployed to my server indicated that the query was thanks to the search function in Zenphoto, and when I went back into my Apache access logs, I eventually found the problem – a flood of hits on my photo gallery.

Apache web server access logs

Each line in the logs looked like the following:

108.162.250.234 – – [21/Dec/2014:04:32:03 -0500] “GET /page/search/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/beacon-3.newrelic.com HTTP/1.1” 404 2825 “https://railgallery.wongm.com/page/search/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/maintenance/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648)”

Each request was bound for “http://js-agent.newrelic.com/nr-476.min.js” or other files hosted at newrelic.com, and the user agent always appeared to be Internet Explorer 8.

New Relic is a software analytics tool I have installed on my server, and on seeing the multiple references to it in my access logs, I remembered that I had updated my version of the New Relic agent just before my performance issues had started. Had I found a bug in it?

The cause

A check of the HTML source of the page in question showed a link to js-agent.newrelic.com embedded in the page, so I came up with the following explanation for the load on my server:

  1. A user hits https://railgallery.wongm.com/page/search/SEARCH_TERM
  2. The New Relic Javascript file at http://js-agent.newrelic.com/nr-476.min.js somehow gets loaded as a relative path, and not an absolute one, which results in a request to:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/nr-476.min.js
  3. My server would then treat the above URL as valid, delivering a page, which then includes a relative link to js-agent.newrelic.com/nr-476.min.js a second time, which then results in a page request to this URL:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js
  4. And so on recursively:
    https://railgallery.wongm.com/page/search/SEARCH_TERM/js-agent.newrelic.com/js-agent.newrelic.com/js-agent.newrelic.com/nr-476.min.js

With this loop of recursive page requests, each one generating a fresh set of search results and requiring its own database queries, it was no wonder my database server was being hit so hard.

As an interim fix, I modified the Zenphoto code to ignore search terms that referenced New Relic, and then rolled back to the older version of the New Relic agent.
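
The change itself was something along these lines. This is a sketch of the general idea rather than the actual Zenphoto patch, and it assumes $searchTerm already holds the incoming search string:

// sketch only: bail out before the database is queried if the 'search term'
// is really just a mangled New Relic URL from the recursive requests above
function isRunawayNewRelicSearch($searchTerm)
{
	return stripos($searchTerm, 'js-agent.newrelic.com') !== false;
}

if (isRunawayNewRelicSearch($searchTerm))
{
	header('HTTP/1.1 404 Not Found');
	exit;
}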

sudo apt-get remove newrelic-php5
sudo apt-get remove newrelic-php5-common
sudo apt-get remove newrelic-daemon
sudo apt-get autoremove newrelic-php5
sudo apt-get install newrelic-php5-common=4.15.0.74
sudo apt-get install newrelic-daemon=4.15.0.74
sudo apt-get install newrelic-php5=4.15.0.74

I then raised a support case for New Relic to look into my issue. In an attempt to reproduce the issue, I rolled forward to the current version of the New Relic agent to play ‘spot the difference’, but I couldn’t find any, and the errors also stayed away.

I’m writing this one off as a weird conflict between the updated New Relic agent running on my server, and an old version of the browser monitoring javascript file cached by a single remote user.

Conclusion

After working through my performance issues I now know more about what my web server is doing, and the extra RAM available following the upgrade means my horrible cron job hacks are no longer required to keep the lights on!

As for the steps I will follow next time, here are the places to check:

  • Google Analytics to check if I am getting a flood of legitimate traffic,
  • Apache access logs for any odd looking sources of traffic,
  • current process list to see where the CPU usage is coming from,
  • currently running MySQL queries for any recurring patterns.

Rebuilding all of my websites
https://wongm.com/2014/07/rebuilding-websites/
Wed, 09 Jul 2014

I’ve been quite busy recently – on Thursday last week I discovered all of my web sites were offline, which resulted in me moving to a new hosting provider, and rebuilding every bit of content. So how did I do it?

Going offline

I first realised something was wrong when I discovered all of my web sites displaying the following ominous error message:

 'Website Suspended' message from cPanel

I checked my email, and I couldn’t find any notification from my hosting provider that my account was suspended – a pretty shit job from them!

However, I wasn’t exactly surprised, as over the past few years I’ve been receiving these automated emails from their system:

Your hosting account with username: [XYZ] has over the last few days averaged CPU usage that is in excess of your account allocation.

This could be caused by a number of factors, but is most likely to be due to a misconfigured installation of a 3rd party script, or by having too many features, modules or plugins enabled on your web site.

If you simply have a very busy or popular web site, you may need to upgrade your account which will give you a higher CPU allocation. Please contact our support team if you need help with this.

Until your usage average drops back below your CPU quota, your account will be throttled by our CPU monitoring software. If your account continues to use more CPU than what it is entitled to, you risk having your account suspended.

All up I was running about a dozen different web sites from my single shared web hosting account, and over the years I’ve had to increase the amount of resources available to my account to deal with the increasing load.

Eventually I ended up on a ‘5 site’ package from my hosting provider, which they were charging me almost $300 a year for – a steep price, but I was too lazy to move everything to a new web host, so I just kept on paying it.

Having all of my sites go offline was enough of a push for me to move somewhere new!

What needed to be moved

All up my online presence consisted of a dozen different sites spread across a handful of domain names, running a mix of open source code and code I had written myself. With my original web host inaccessible, I had to rebuild everything from backups.

You do have backups, don’t you?

Welcome to the western suburbs

The rebuild

I had been intending to move my web sites to a virtual private server (VPS) for a while, and having to rebuild everything from scratch was the perfect excuse to do so.

I ended up deciding to go with Digital Ocean – they offer low-ish prices, servers in a number of different locations around the world, fast provisioning of new accounts, and an easy migration path to a faster server if you ever need it.

After signing up to their bottom end VPS (512 MB RAM and a single core) I was able to get cracking on the rebuild – they emailed me the root password a minute later and I was in!

As I had a bare server with nothing installed, a lot of housekeeping needed to be done before I could start restoring my sites:

  • Swapping over the DNS records for my domains to my new host,
  • Locking down access to the server,
  • Setting up a swap file,
  • Installing Apache, MySQL and PHP on the server,
  • Creating virtual directories on the server for each separate web site,
  • Creating user accounts and empty databases in MySQL

I’ve only ever played around with Linux a little, but after 30 minutes I had an empty page appearing for each of my domain names.

To get my content back online, thankfully I had the following backups available to me:

  • I run three blogs on the open source WordPress software, so I could just install that from scratch to get a bare website back
  • My main photo gallery runs on the open source ZenPhoto software, so that was another internet download
  • Each blog and photo gallery uses a custom theme, of which I had backups on my local machine to re-upload
  • I keep a mirror of my WordPress uploads on my local machine, so I just had to reupload those to make the images work again
  • When I upload new photos to my gallery, I keep a copy of the web resolution version on my local machine, which I was able to reupload
  • Every night I have a cron job automatically emailing a backup copy of my WordPress and ZenPhoto databases to me, so my blog posts and photo captions were safe
  • Some of my custom web code is available on GitHub, so a simple git pull got those sites back online

Unfortunately I ran into a few issues when restoring my backups (doesn’t everyone…):

  • My WordPress backup was from the day before, and somebody had posted a new comment that day, so it was lost
  • I had last mirrored my WordPress uploads about a week before the crash, so I was missing a handful of images
  • The last few months of database backups for Rail Geelong were only 1kb in size – it appears the MySQL backup job on my old web host was defective
  • Of the 32,000 photos I once had online, around 2,000 files were missing from the mirror I maintained on my local machine, and the rest of them were in a folder hierarchy that didn’t match that of the database

I wasn’t able to recover the lost comment, but I was able to chase up the missing WordPress uploads from other sources, and thankfully in the case of Rail Geelong my lack of regular updates meant that I only lost a few typographical corrections.

As for the 2,000 missing web resolution images, I still had the original high resolution images available on my computer, so my solution was incredibly convoluted (there is a sketch of the ‘diff’ step after the list):

  • Move all of the images from the mirror into a single folder
  • Use SQL to generate a batch file to create the required folder structure
  • Use more SQL to generate a second batch file, this time to move the images into the correct place in the folder structure
  • Run a diff between the images that exist, and those that do not
  • Track down the 2,000 missing images in my collection of high resolution images, and create a web resolution version in the required location
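
The ‘diff’ step is easier to picture as code. The snippet below is a rough sketch of the idea, comparing what the gallery database expects against what actually exists on disk, rather than the SQL-generated batch files I actually used, and the table, column and path names are placeholders rather than the real ZenPhoto schema:

// sketch only: list the images the gallery database expects but which are
// missing from disk - table, column and path names are placeholders
$db = new mysqli('localhost', 'gallery_user', 'password', 'gallery');

$missing = array();
$result = $db->query("SELECT folder, filename FROM gallery_images");
while ($row = $result->fetch_assoc())
{
	$expectedPath = '/var/www/gallery/albums/' . $row['folder'] . '/' . $row['filename'];
	if (!file_exists($expectedPath))
	{
		$missing[] = $expectedPath;
	}
}

// dump the list, ready to regenerate web resolution versions from the originals
file_put_contents('missing-images.txt', implode("\n", $missing));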

Three hours after I started, I had my first win.

Unfortunately I found a number of niggling issues throughout the night.

By 2am I was seven hours in, and had managed to get another domain back online.

Eventually I called it quits at 4am, as I waited for my lethargic ADSL connection to push an elephant up a drinking straw.

I spent the weekend out and about so didn’t get much time to work on rebuilding my content – it wasn’t until the fourth day after my sites went down that I started to track down the 2,000 missing images from my photo gallery.

Thankfully I got a lucky break – on Monday afternoon I somehow regained access to my old web host, so I was able to download all of my missing images, as well as export an up-to-date version of the Rail Geelong database.

After a lot more stuffing around with file permissions and monitoring of memory usage, by Tuesday night it seemed that I had finally rebuilt everything, and it was all running somewhat reliably!

What’s next

Plenty of people online seem to rave about replacing the Apache web server and standard PHP stack with Nginx and PHP-FPM to increase performance – it’s something I’ll have to try out when I get the time. However for the moment, at least I am back online!

Testing the PTV transport data API using PHP
https://wongm.com/2014/03/consuming-ptv-transport-data-api-from-php/
Wed, 12 Mar 2014

Last week on March 7 Public Transport Victoria finally released something many Melbourne coders and app developers have been waiting for – a public API to query their timetable data.

Passengers depart the train on platform 10

Unfortunately the new API isn’t perfect – it doesn’t meet the Google-defined GTFS standard that many existing apps support, and no real-time arrival data is available – for trams you have to turn to TramTracker, and for trains and buses you’re on your own.

Despite the above limitations, I jumped into playing around with the API. The full set of documentation can be found at www.data.vic.gov.au, and after I made my API key request on the evening of March 6, I received a reply yesterday – March 11.

API key received from PTV

With my developer ID and security key received, the first thing to try out is the ‘healthcheck’ endpoint. Each API request needs to be digitally signed with a HMAC-SHA1 hash – this ensures that each request to the PTV servers is coming from a registered developer – and the healthcheck page allows a developer to test if their requests are being built up correctly.

For some people, working out the HMAC-SHA1 signature of each request is easy – the API documentation includes sample code in C#, Java and Objective-C to do just that. Unfortunately for me, my random hacking about is done in PHP, so I needed to roll my own version.

(jump to the end of my sample code)

My initial attempt generated something that looked like a HMAC-SHA1 signature, but it didn’t pass validation – I had screwed something up! An hour or so of bashing my head against the wall followed, as I played around with Base64 encoding and byte arrays, but it still wasn’t working.

I then tried a different tack – getting the sample C# code working in PowerShell. For such an odd tactic it was simple enough to achieve, and I was able to get the code to run and generate a valid signature, which I used to make my first valid API hit. From there I was able to hack about with my PHP code until it generated the same signature as the C# code did – the bit I had missed was the upper casing.

The end result from PTV – a small snippet of JSON.

Valid response received from the PTV API

Sample code

Here is the method I wrote to take an API endpoint URL (for example “/v2/healthcheck”) and build up a valid signed URL to request data from.

function generateURLWithDevIDAndKey($apiEndpoint, $developerId, $key)
{
	// append developer ID to API endpoint URL
	if (strpos($apiEndpoint, '?') > 0)
	{
		$apiEndpoint .= "&";
	}
	else
	{
		$apiEndpoint .= "?";
	}
	$apiEndpoint .= "devid=" . $developerId;
	
	// hash the endpoint URL
	$signature = strtoupper(hash_hmac("sha1", $apiEndpoint, $key, false));
	
	// add API endpoint, base URL and signature together
	return "http://timetableapi.ptv.vic.gov.au" . $apiEndpoint . "&signature=" . $signature;
}

And here is a sample usage of the above method. (note the sample data for key and developer ID!)

$key = "9c132d31-6a30-4cac-8d8b-8a1970834799"; // supplied by PTV
$developerId = 2; // supplied by PTV

$date = gmdate('Y-m-d\TH:i:s\Z');
$healthcheckEndpoint = "/v2/healthcheck?timestamp=" . $date;

$signedUrl = generateURLWithDevIDAndKey($healthcheckEndpoint, $developerId, $key);
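
To round things out, here is one way to actually call the signed URL and decode the JSON that comes back. This last step isn’t part of the sample above, it is just the obvious next one:

// fetch the signed healthcheck URL and decode the JSON response
$response = file_get_contents($signedUrl);
if ($response === false)
{
	die("Request to the PTV API failed");
}

$healthcheck = json_decode($response, true);
var_dump($healthcheck);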

Now all I need to do is build the wonderful app I have in mind…

Code in GitHub

You can find the above sample code on GitHub at https://github.com/wongm/ptv-api-php-test-harness – all you need to do is update the $key and $developerId values with your own credentials, and it should work on your own server.

Footnote

It looks like plenty of other people are interested in the API – here is a media release from Data.Vic, who host it on behalf of PTV.

The recent release of the PTV Timetable API has, not surprisingly, created the highest level of interest and activity for a single data release on Data.Vic.

In the 7 days since the release of the API data record we have seen:

– Over 3500 data record page views
– More than 700 downloads of the API document, and
– Close to 100 API key requests
