Archive for January, 2008

Broadband and the Digital Divide in Australia

So… I’m currently moving house – it’s been on the cards for a while, but I put it off pending a reply from a company I’d been hoping to snare my ‘dream job’ with. The reply came, and it looks like I’ll be continuing my current status as a self-employed consultant / PhD student for a while yet (dawgammit), so it makes sense to make the move now.

I’ve found a fantastic place about 25kms out of Brisbane, at a spot called Mount Nebo (quite near to where my parents live, actually). The rent is good, the view is FANTASTIC and it’s a rural area, so I’ll be able to get back to my roots and have a few chooks etc 🙂

One big problem – if you can believe it – here in Australia, only 25 km out of a major city, I cannot get broadband internet. To understand why, consider that the telecommunications utility in this country (Telstra, perhaps aptly nicknamed ‘Hellstra’) was until recently government owned and is now a publicly listed company – so now the mighty dollar rules the roost.

It strikes me as a bit odd that the government would privatise a company that provides a service every bit as important as other utilities like water and electricity without imposing conditions – like, for instance, an obligation to provide that service to folks in areas where the population density might not make for a rapid return on investment. That is, after all, the reason we have government-owned utilities in the first place.

Actually, these days broadband internet is probably a more important utility than water or electricity – if you’ve got no water, you can always collect it from your roof. If you’ve got no electricity, you can probably pay for solar panels. If you’ve got no broadband – well, sorry mate – you’ve just got no broadband. Hard luck.

This has turned the issue into a political hot potato. In fact, one of the major election promises of the incoming Rudd Labor government was that, if elected, they would ensure the entire population of Australia had 8 Mbps internet connections within a few years.

In response, Telstra’s current American CEO, Sol Trujillo, rejected the Rudd Labor Government’s idea of Telstra and its competitors jointly building a ‘new’ broadband network (probably FTTN – fibre to the node), dismissing it as some form of ‘kumbaya, holding hands theory – we are only going to participate in things we own and control’.

I must admit I can see his logic – CEOs are paid the big bucks to maximise shareholder value. Opening up your monopoly-owned exchanges and infrastructure to competitors for ‘the common good’ doesn’t cut it in this modern world of capitalism.

On the other hand, Telstra also blissfully enjoys its monopoly… it argues that I can get broadband via its wireless (3G) offering, Next G. Yep, it’s fantastic – a whopping top speed of around 512 kbps, for the low, low price of US$150 per month with a maximum of 3 GB of download capacity.

I can use 3 GB in my sleep – just 10 km down the road I can get 50 GB a month for $50 at 5 Mbps… so broadband at roughly 50 times the market rate per gigabyte and 10% of the market speed is really only for those who have too much money or too few brain cells. No broadband for me – I’ll have to buy a carrier pigeon instead.

Alternatively… well, I am a comms engineer, so I have the technology. I might just see if I can’t find someone with ADSL broadband in the valley below and offer to pay their monthly fees if they let me rig up a yagi on their property and long-shot the internet over an 802.11g/n link. It might not be strictly legal, but then again, policing not-strictly-legal conduct is probably the reason they never privatised the police force as well…

M

4 comments January 30th, 2008

How to Select Polygons which contain Points in ArcMap

Occasionally I write about things on this site that I haven’t been able to easily discover via a Google search – here’s one of them. I’ve been doing some GIS work for a client recently, and I found myself befuddled by a simple problem.

When we have a large number of x/y or lat/long points in ArcMap (for instance, a lot of yield monitor data) overlaid on top of a polygon dataset (for instance, block or paddock boundary shapefiles) in ArcGIS, what’s the best way to select only those paddocks (polygons) which contain logged points?

Actually, it’s not too difficult. You have to use something called an ‘intersect’.

Select By Location in ArcGIS

Basically, in ArcGIS / ArcMap lingo, you’re selecting the features in one layer (our block boundary polygons) that intersect or contain features from another layer (our point data layer) – as shown below, with the selected features highlighted in cyan.

A GIS Intersect

To achieve this type of selection, go to the Selection menu and choose ‘Select By Location’, which will open the ‘Select By Location’ dialogue (you can click the thumbnail to view it full size).

Intersect Dialogue in ArcGIS

In the dialogue, first select the data layer that contains the polygons / blocks / features you’d like to select (‘select features FROM the following layer’), then choose ‘intersect’ in the drop-down, followed by your points layer in the ‘features from this layer’ box.

After some processing, this will select all blocks / polygons / features that contain your points of interest.
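For anyone who would rather script this than click through the dialogue, here is a rough Python sketch of the same selection using ArcGIS’s geoprocessing library (arcpy, which ships with later ArcGIS releases). The file paths and layer names are placeholders for illustration, not anything from my actual project:

    # Hypothetical paths and layer names - substitute your own data.
    import arcpy

    blocks_fc = r"C:\data\paddock_boundaries.shp"   # polygon layer (blocks / paddocks)
    points_fc = r"C:\data\yield_points.shp"         # point layer (yield monitor data)

    # Selections operate on layers rather than on shapefiles directly,
    # so make in-memory feature layers first.
    arcpy.MakeFeatureLayer_management(blocks_fc, "blocks_lyr")
    arcpy.MakeFeatureLayer_management(points_fc, "points_lyr")

    # Select every polygon that intersects (i.e. contains) at least one point.
    arcpy.SelectLayerByLocation_management("blocks_lyr", "INTERSECT", "points_lyr")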

Exporting a Selection to a new Shapefile in ArcGIS

Of course, now you’ll probably wish to export the selected features to a new shapefile or layer – that process is described here: ‘Creating Subset Datasets in ArcGIS’.
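If you are scripting it as per the sketch above, that export step boils down to a single call (again, the names are placeholders):

    # Write only the currently selected polygons out to a new shapefile.
    arcpy.CopyFeatures_management("blocks_lyr", r"C:\data\paddocks_with_points.shp")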

2 comments January 17th, 2008

Overclocking the Intel E8200 Core 2 Duo ‘Wolfdale’ Processor

Ok – so, if you read my last post about GIS and sugar cane yield monitors, you’ll be well aware that I’ve been labouring over a very processor-intensive task: Inverse Distance Weighted (IDW) yield interpolation. With over 2 million data points, the process takes many days… SO… in the interest of speeding it up, I recently decided to buy and overclock one of the new-generation ‘Wolfdale’ Intel 45 nm Core 2 Duo processors – the E8200, which is the little brother of the new E8400 and E8500.

Firstly, the E8200 itself uses the same socket (LGA 775) that Intel has been using for quite some time now, but as I’ve been using AMD64 processors up until now (socket 939), I needed to get a new motherboard as well – I chose the Gigabyte P35C-DS3R.

Meh – what the heck – if you’re going to go all out, why not get the works – so I topped off the purchase with some Kingston HyperX PC2-8500 DDR2 RAM as well, which is rated to 1066 MHz.

Add to that a nice little RAID 0 array of two 500 GB 7,200 RPM hard drives for my data, plus a 10,000 RPM 160 GB WD Raptor, and you’ve got the makings of a very speedy system.

Recommended Overclocking Settings for the E8200

Ok – so, the E8200 is rated at 2.66 GHz out of the box. With a little trial and error, I’ve been able to get it stable at a pretty decent 3.92 GHz with the stock cooler. Here are the settings I used:

  • RAM latency settings – 5-5-5-10
  • RAM:FSB ratio – 1:1 (so the RAM is effectively matched to the FSB, running at around 490 MHz – well, actually 980 MHz effective, because it’s DDR – double data rate)
  • CPU core voltage – 1.25 V (this one was a surprise – many folks suggest you need to go as high as 1.5 V – I didn’t)
  • RAM voltage – 2.2 V
  • CPU (FSB) clock – 490 MHz at the default multiplier of 8, which works out to 3.92 GHz (see the quick check below this list)
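If you want to sanity-check how those settings hang together, the arithmetic is simple enough to script – the figures below are just the values from the list above:

    fsb_mhz = 490      # front side bus clock set in the BIOS
    multiplier = 8     # E8200 default multiplier
    ram_ratio = 1.0    # RAM:FSB ratio of 1:1

    core_clock_ghz = fsb_mhz * multiplier / 1000.0        # 3.92 GHz
    ram_clock_mhz = fsb_mhz * ram_ratio                    # 490 MHz actual
    ram_effective_mhz = ram_clock_mhz * 2                  # 980 MHz effective (DDR)

    print(core_clock_ghz, ram_clock_mhz, ram_effective_mhz)   # 3.92 490.0 980.0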

The system (on account of the low voltages) actually runs very cool – around 42 degrees Celsius (108 F) at idle, with a max temp of 62 degrees C (144 F) running Prime95, which is a CPU stress-testing tool (amongst other things) – and that’s just using the stock CPU cooler with Arctic Silver 5 heatsink compound.

Cool, in my mind, is good. If a processor runs cool, it’s usually a good sign that it’s energy efficient – and it definitely means less energy has to be spent keeping it cool, which means lower electricity bills and a smaller environmental footprint – a good outcome all round.

So… contrary to the bad press I’ve heard elsewhere about the E8200 not being particularly overclockable, I’m absolutely stoked with these figures – but rumour has it that the E8400 and E8500 are hitting prodigious numbers in the mid-4 GHz range, so they may be worth the additional investment, although the E8500 is a little overpriced for only an incremental increase.

For those of you interested – my particular CPU is a stepping 6 part – here’s a picture of the processor’s SuperPi numbers (just over 12 seconds):

Intel E8200 Overclocking

Some Additional E8200 Overclocking Tips

  1. To get very high CPU speeds like those noted above, YOU WILL NEED FAST RAM that can quite happily sit at ~960 MHz or above. A high-quality DDR2-800 stick MIGHT cut it, but I’d recommend going for a minimum of 1066 MHz RAM (PC2-8500); otherwise you’re going to find your overclock constrained to around 3.3 GHz.
  2. A new RAM technology called DDR3 is currently emerging. It offers speeds in excess of 1066 MHz, but it is horrendously expensive – consider getting a motherboard (like the Gigabyte P35C-DS3R I chose) that can handle BOTH DDR2 and DDR3 memory. Socket 775 is going to be around for a while, and you might want to use DDR3 in the future.
  3. Don’t go overboard with the CPU voltage – some people recommend going as high as 1.5 V, but I’ve found that past 1.3 V you’re mostly generating a lot more heat for very little additional clock speed. It’s not worth it unless you’re nuts and like spending heaps of money on expensive cooling gear.

How did you go? Did you achieve the same speeds? Are you doing better with an E8400 or E8500? How much better? Got any questions? I’m really keen to hear from you – leave a comment below!

All the best –

doc

123 comments January 16th, 2008

How to Process, Filter and Clean Yield Monitor Data

Hi Everyone – long time no post, I realise.

Ok – so, a bit of an update – I’ve spent the last little while learning a new language (VB.NET). It’s been a nice challenge, as most of my programming to date has been non-object-oriented stuff – command-line interfaces or fully embedded solutions, usually written in ANSI C.

I’ve been up in North Queensland helping a company there develop yield maps for their sugar cane harvesters (actually, I also helped design the equipment that sits on the harvesters and does the yield monitoring). The process of taking the raw data off the harvesters and converting it into yield maps is a fairly long and drawn-out one. Firstly, you need to filter the data (something I had been doing in Excel) to remove erroneous positions and obvious outliers.
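The exact filtering rules depend on the yield monitor, but to give a flavour of what I mean, here is a minimal Python / pandas sketch – the column names and thresholds are invented for illustration, not the ones I actually use:

    import pandas as pd

    # Hypothetical raw logger export with latitude, longitude and a raw yield reading.
    df = pd.read_csv("harvester_raw.csv")

    # Drop records with obviously bad positions (no GPS fix, logged as zeros).
    df = df[(df["lat"] != 0) & (df["lon"] != 0)]

    # Drop obvious yield outliers - here, anything more than 3 standard
    # deviations from the mean, which is a common rule of thumb.
    mean, std = df["yield_raw"].mean(), df["yield_raw"].std()
    df = df[(df["yield_raw"] - mean).abs() <= 3 * std]

    df.to_csv("harvester_filtered.csv", index=False)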

Secondly, you need to convert the latitudes and longitudes from the yield monitor into eastings and northings (a degree of latitude or longitude isn’t the same distance everywhere on the earth) so that you can project the data onto the local industry GIS system – the projection used is the Map Grid of Australia, based on the Geocentric Datum of Australia (GDA94 / MGA Zone 55). That was a bit of a mission – I had to convert a long and convoluted mathematical process (about three pages of equations) into a neat little Visual Basic function.
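These days you can sidestep re-implementing those three pages of equations by leaning on a projection library. As a rough illustration of the same conversion (not my Visual Basic code), here is what it looks like in Python with pyproj, assuming GDA94 latitude / longitude in and GDA94 / MGA Zone 55 eastings and northings out:

    from pyproj import Transformer

    # EPSG:4283 = GDA94 (lat/long); EPSG:28355 = GDA94 / MGA zone 55 (eastings/northings).
    transformer = Transformer.from_crs("EPSG:4283", "EPSG:28355", always_xy=True)

    lon, lat = 146.03, -17.52                    # a made-up point in North Queensland
    easting, northing = transformer.transform(lon, lat)
    print(easting, northing)                     # metres in MGA zone 55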

Thirdly, you need to tie the data back to actual weights to obtain a calibration for each machine (I had a hybrid Excel / Access database for that job), and fourthly, you need a way to automagically recalibrate the harvester as the season progresses and the pressures the yield monitor uses to measure cane throughput start to change (mostly they go up – wear and tear on the harvester).
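I won’t reproduce the whole calibration workflow here, but the core idea is just a per-machine scale factor – something along these lines (a simplified sketch with invented numbers, not my actual code):

    # Weighbridge (actual) tonnes versus what the yield monitor estimated
    # for the same deliveries, for one harvester.
    actual_tonnes  = [412.0, 388.5, 401.2]
    monitor_tonnes = [455.3, 430.1, 440.8]

    # A single multiplicative calibration factor for the machine.
    cal_factor = sum(actual_tonnes) / sum(monitor_tonnes)

    # Apply it to every logged point from that harvester (values in t/ha).
    raw_point_yields = [92.1, 88.4, 95.0]
    calibrated = [y * cal_factor for y in raw_point_yields]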

I was planning to teach my clients how to do all of this manually, but since I had a hankering to learn a new language and wanted to make life easier for them, I asked a famous Googler what object-oriented language he would use. He suggested Visual Basic, which suited me fine, because it turns out to be quite similar to C in many ways.

So… now, a month later, I know a new language and have a great little (well, humongous – larger and more comprehensive than I ever imagined) software package.

The package condenses what took me about 7 or 8 days to do using Excel spreadsheets and the like into an algorithm – essentially distilling it down to a five-minute batch process. The danger is that it will look TOO easy and my clients won’t appreciate the effort that’s gone into it 😀 – but, meh, the satisfaction of a job well done was worth the extra effort.

Once the yield data has been processed by the software, it feeds into a GIS package which converts the point yields into a smooth surface – a yield map. The one I use a lot is ArcGIS, and it makes life rather easy. Specifically, it uses a function called ‘Inverse Distance Weighted (IDW) Interpolation’ to draw the map. Basically, for every cell of the output map, it looks at the 8–12 nearest recorded points from the datalogger and uses a distance-weighted average of them to ‘guess’ what the yield is at that spot – and hence comes up with a nice smooth yield map.
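If you’re curious what IDW actually does under the hood, here is a bare-bones Python sketch of the idea using NumPy and SciPy. This is a generic illustration rather than ArcGIS’s implementation, and the neighbour count and power parameter are just typical defaults:

    import numpy as np
    from scipy.spatial import cKDTree

    def idw(xy_known, values, xy_targets, k=12, power=2.0):
        """Estimate values at xy_targets from the k nearest known points,
        weighting each neighbour by 1 / distance**power."""
        values = np.asarray(values, dtype=float)
        tree = cKDTree(xy_known)
        dist, idx = tree.query(xy_targets, k=k)
        dist = np.maximum(dist, 1e-10)    # avoid division by zero at the data points
        weights = 1.0 / dist**power
        return np.sum(weights * values[idx], axis=1) / np.sum(weights, axis=1)

    # e.g. known yield points (eastings/northings) interpolated onto a grid of map cells:
    # grid_yields = idw(point_coords, point_yields, grid_cell_coords)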

That process can take a long time – for a year’s worth of data from 23 harvesters, calculating the IDW maps took me about 3 days on my Opteron 165-based system – which leads me into my next post, about my new computer (feeling very geeky here).

17 comments January 16th, 2008

