
The New Basecamp Upgrade – Do We / Don’t We?

We use Basecamp, sorry, Basecamp Classic, in the office for the majority of our project management needs. More to the point, I live within OmniFocus on the Mac and iPhone, so I make use of Spootnik to sync between OmniFocus and Basecamp, which as I understand it doesn't currently work with the new Basecamp. Changes are therefore a big deal, and these are bigger than most. The 'upgrade path' is more than that; it's in effect testing and choosing a new product, except we don't want a new product. If I'm evaluating a new product then I'll probably be looking outside of Basecamp full stop.
On top of that, if we do evaluate it there's no turning back, so we'd have to run the two in parallel. I've not been motivated even to sign up for a free trial; complete apathy. So I googled, let the internet do my thinking for me, and found this, which pretty much sums up how risky a decision 37signals (the makers of Basecamp) have made. Full credit is given; follow the link to the full article, as it felt wrong to quote much more.

My conclusion: I'll not even bother looking to see what the new Basecamp is like, not now, not as an upgrade. I might, however, have a look to see whether there's a better suited product than Basecamp Classic, but it might not be from 37signals, or we may well stay where we are, quite happy.

The New Basecamp, New Coke, and New Decisions

There is so much to say about The New Basecamp that reviewing this release is going to take several posts. So, for starters let’s talk about the big picture decisions related to this major new release.

The Name

This week we got “The New” Basecamp and The New iPad. It seems to be an odd choice for both Basecamp and the iPad.


In theory, this works better for hardware. The 37Signals guys were quick to point out that Honda rolls out a new Civic every year and they don’t name them the Civic HD, Civic 4S, etc. You just get a new Civic. But the car industry has the decency to put a model year on it.


Apple’s been playing this game for a while. I own a white MacBook and 95% of the time the actual name of the model doesn’t matter. But when it does matter, I have to know that it is the 13-inch Early 2009 MacBook. I suspect “the new iPad” will have the same issue, because the name alone won’t help you much in 2014 when you are trying to get support and they need to know whether you have an iPad 2, a 2012 iPad, a 2013 iPad, or whatever.


But with Basecamp, the name game feels even more strange. What we once knew as Basecamp is now Basecamp Classic. And this new thing, with a completely different feature set, has assumed the Basecamp name and is generally prefaced with “the new” to differentiate it.


Why the name games? Did Ryan in The Office completely ruin the ability of software companies to name their product “two-dot-oh”?


The most straightforward answer seems to be this:


Unlike Fog Creek with Trello and FogBugz, 37Signals wanted to leverage the brand value of their existing product with their new, created-from-scratch product. Where Fog Creek created a second project management tool to live alongside their existing tool, 37Signals is keeping the brand name on the new product. Think: New Coke. Oh wait, maybe that’s not the image they desired.


However, unlike most upgrades (excluding Apple’s treatment of video editing software) this “upgrade” actually removes several previously “key” features.


A major release like this is going to upset many users however you do it. If you position it as Basecamp 2.0 and you remove key features, well, users are going to freak out.


So, the team at 37Signals appears to be trying to walk a fine line. The new thing is new and different, but not the same product at all. So, you get the old thing renamed and a few Jedi mind tricks later… everybody is going to be okay. In theory. But this feels like a decision they will regret if for no other reason than they are going to get tired of talking about it.


No Auto Upgrade

Another complication is the decision to not auto-upgrade users to the new Basecamp. Instead, your current projects and accounts may continue to live on forever (or some version of forever) in Basecamp Classic. You may give the new Basecamp a whirl via a free trial and import your projects over, but you don’t have to do so.


Why would anybody stay in the old version of a product if the new version is available for essentially the same price? (Let’s ignore the issue about no longer supporting a “free” version in the new Basecamp.)


This isn’t a decision that was made by accident. There is a really good reason, but it’s going to frustrate a lot of folks. You see, the new Basecamp really is a brand new product. Completely new code, new features, new style, and all the things that go with a new product. Being a new product, the new Basecamp has a limited feature set.


Yes, there are new features that Basecamp (classic) never had. But there are features that are missing. Some are quite intentional (no private messages!) and some are more complex (no time tracking!).


Tangent: When Salesforce rolls out a new release (three times a year) you rarely lose key features. And if something is going to change, there is a significant build-up to the event that includes transition guides, the works. If this winter Salesforce rolled out a release that, say, removed the Opportunities object, then all hell would break loose. You don’t just auto-upgrade users to a version of your application that does not include key functionality they have previously enjoyed.


And thus, 37Signals put themselves in an awkward situation. Or, more importantly, they put their users in an awkward situation. You can keep on paying the same price for eternity for the old tool that they are not likely to enhance ever again, or you can move to the new application with a different feature set.


Good luck convincing your budget guy of option one, and good luck convincing your users of option two.


continue reading via The New Basecamp, New Coke, and New Decisions « Technical Support Is At The Deli.

Andy Flisher is a Software Developer based in the North East of England specialising in cross platform development. Mobile Development experience includes Windows Phone, Android, and iPhone Apps. Desktop Software Development includes bespoke Windows, Linux, and Mac Applications. Web Development Skills include PHP, Perl, Python, ASP (Classic and .NET) – Andy Flisher on Google+

Rachel’s Parachute Jump for Butterwick Hospice

At the third time of asking (weather, not nerves!), Rach managed her 10,000ft tandem skydive at Peterlee Skydive Academy (huge thanks to them by the way, they were great), all in aid of Butterwick Hospice. If you wish to sponsor her, you can whilst the page is still open at – http://www.justgiving.com…

 

London 2012 – Torch Relay Stockton on Tees

The London 2012 Torch Relay hit the North East, and Stockton on Tees, on 17th June 2012. Here’s our view down Greens Lane, near Ian Ramsey School.

BTServer causing high CPU utilization (When wake from Sleep)

Symptoms are that your Mac wakes from sleep and CPU usage spikes to 100%; Activity Monitor shows it’s the BTServer process using it all, and you have to quit it. This happens, 100% reproducibly, whenever you’ve used the iOS Simulator at some point since the last reboot.

This isn’t a perfect solution; in fact it’s ruddy useless if the iPhone / iPad app you’re developing uses Bluetooth, but if it doesn’t then you can get away without BTServer, so we disable it.

Thanks to ‘Frankie’ via the Apple Support Forums

 

Re: BTServer causing high CPU utilization

Mar 4, 2012 5:51 AM (in response to Ryan Homer)

I have the same issue. My mac is 10.6.8 and I’m using XCode 4.2.

 

Just try:

 

Go to the Macintosh HD > Developer > Platforms > iPhoneSimulator.platform > Developer > SDKs > iPhoneSimulator5.0.sdk > System > Library > LaunchDaemons directory and then open the plist file from there. Locate the “Disabled” key and change its value from “NO” to “YES”.

You may need to change the permissions on the plist file, and also on its parent folder, in order to be able to save the modification.

 

Hope this help.

 

Frankie

via BTServer causing high CPU utilization: Apple Support Communities.
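
For the Terminal-inclined, here’s a minimal sketch of the same change from the command line. The file name com.apple.BTServer.plist is my assumption (the forum post just says “the plist file”), so check the LaunchDaemons folder for the exact name on your SDK version:

cd "/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk/System/Library/LaunchDaemons"
ls *.plist                                            # confirm the actual file name first
# Flip the Disabled key to YES/true; sudo sidesteps the permissions issue mentioned above.
# If the key doesn't exist yet, use "Add :Disabled bool true" instead of "Set".
sudo /usr/libexec/PlistBuddy -c 'Set :Disabled true' com.apple.BTServer.plist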


Instant Messaging on Multiple Devices

Instant Messaging on Multiple Devices – struggling to believe this problem hasn’t been solved properly yet

Redesign Instant Messaging Platforms to be Used on Multiple Devices

POSTED AT JANUARY 29, 2011 // TECHNOLOGY

 

Most of the people who care about it have a smartphone and a data plan. They get their emails on the phone, browse the web and often also use instant messaging.

 

Software like Google Talk, Yahoo! Messenger, or Skype now exists for almost all devices and operating systems: Windows, OS X, iPads, iPhones, BlackBerry, Android, …

 

But what happens when you use it on multiple devices at the same time? Maybe you are on your PC, then get pulled into a meeting and bring your iPad, and then go out for a coffee and have your cellphone with you. Sound familiar?

 

Today, either the platform sends the messages to all the devices or just to the last one from which you logged in. Annoying in both cases. It’s time to change.

 

The IM software should be written so that it knows which device you are using. For example, as long as you type or use the mouse one can assume you are using the PC. If you start touching the iPad and not the PC, then switch to that. If the phone detects movement (GPS?) and there is no activity on the other devices, switch to that.

 

In addition, the chat logs should be kept on the server, so that whatever device you use you always have them with you.

 

Privacy problems, you say? Not really. You just need to encrypt the messages that you send and receive and use the same key on your devices, so that they can encode/decode them locally, and on the server there is just an indecipherable series of characters. The Off-The-Record plugin, available for Adium and Pidgin, does just that.

 

It is not that hard. Who will get there first? A new startup or the IM companies?

 

via Redesign Instant Messaging Platforms to be Used on Multiple Devices.

Convert a Version 7 vmdk Disk to Version 4

This post is to highlight how you can convert a version 7 vmdk disk to version 4. The background is that since vSphere 4.0 the default virtual machine hardware version is version 7, whereas under ESX or ESXi 3.5 it was version 4 (I think versions 5 and 6 belong to VMware Workstation), so if you have any need to take a virtual machine from a 4.0 / 4.1 / 5.0 etc. host backwards to a 3.5 host you’re, by default, knackered.
The easiest solution, by the way, is to use VMware Converter: point it at source and destination and it will sort it out, but sometimes this isn’t an option. In my case I had the vmdk file (the small 1kb descriptor file and the 20GB *-flat.vmdk disk image it references) and nothing else, except that I knew my source was a version 7 VMware disk image. I had copied the vmdk image files to the ESXi 3.5 host, but on creating a new virtual machine it basically pretended it couldn’t see the disk image; browsing the datastore to add the virtual disk showed nothing, no clue it even existed. As always, this is what I did: take a backup first, your fault if you screw it up, etc, etc.

Using vi, or any other text editor, open up the vmdk file (the small one, *NOT* the big one), eg in my case I had;

/vmfs/volumes/49b13e24-6619f880-49fd-00151798cc79/Linuxx64 # ls -hlat
-rw------- 1 root root 20.0G Jan 9 16:11 Linux x64 Clone-flat.vmdk
-rw------- 1 root root 486 Jan 9 16:09 Linux x64 Clone.vmdk
drwxr-xr-t 1 root root 2.9k Jan 9 15:58 ..
drwxr-xr-x 1 root root 560 Jan 9 15:45 .
/vmfs/volumes/49b13e24-6619f880-49fd-00151798cc79/Linuxx64 #

It’s the small 486-byte descriptor file (Linux x64 Clone.vmdk) we want; the *-flat.vmdk is the actual disk image data. Once opened you will have something like;

# Disk DescriptorFile
version=3
CID=7d8e12e8
parentCID=ffffffff
createType="vmfs"
# Extent description
RW 524288000 VMFS "Linux x64 Clone-flat.vmdk"
# The Disk Data Base
#DDB
ddb.toolsVersion = "7458"
ddb.adapterType = "lsilogic"
ddb.geometry.sectors = "63"
ddb.geometry.heads = "255"
ddb.geometry.cylinders = "32635"
ddb.uuid = "60 00 C2 93 7a c9 31 2b-8d 11 61 d1 30 66 5c 41"
ddb.virtualHWVersion = "7"

Note the version and ddb.virtualHWVersion values; simply change them so the descriptor reads;

# Disk DescriptorFile
version=1
CID=7d8e12e8
parentCID=ffffffff
createType="vmfs"
# Extent description
RW 524288000 VMFS "Linux x64 Clone-flat.vmdk"
# The Disk Data Base
#DDB
ddb.toolsVersion = "7458"
ddb.adapterType = "lsilogic"
ddb.geometry.sectors = "63"
ddb.geometry.heads = "255"
ddb.geometry.cylinders = "32635"
ddb.uuid = "60 00 C2 93 7a c9 31 2b-8d 11 61 d1 30 66 5c 41"
ddb.virtualHWVersion = "4"

That simple, honest: one edit in a text file and you’re done. If you try and add this to a virtual machine under ESX or ESXi 3.5 it will now see the vmdk and work just fine, in my case at least.
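
If you need to do this more than once, the same edit can be scripted. This is just a sketch assuming a GNU-style sed (the busybox sed on an ESXi host may not support -i, in which case redirect to a temporary file instead) and the descriptor file name from my example:

sed -i -e 's/^version=3/version=1/' \
       -e 's/ddb.virtualHWVersion = "7"/ddb.virtualHWVersion = "4"/' \
       "Linux x64 Clone.vmdk"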


Buffalo Terastation Raid Recovery (PPC) under Linux (i386)

Buffalo Terastation Raid Recovery (PPC) under Linux (i386)

First off, a caveat: the data I had on my Terastation was not life or death. If this had failed the world would not have ended; in fact it had sat for 6 months un-attempted, that’s how unimportant it was. Getting the data back was a ‘nice to have’, so this is not a safe or guaranteed process. If in doubt, investigate using dd to back up the partitions before attempting it. Don’t blame me! But in essence Buffalo Terastation RAID recovery is possible using an i386 generation donor box.
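
For what it’s worth, a dd image of each RAID member partition, taken to separate storage before touching anything, would look something like this (the device name and destination path here are just examples, not from my recovery):

# Image one member partition to a file on separate storage
dd if=/dev/sda3 of=/mnt/backup/sda3.img bs=1M conv=noerror,sync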

So, history, I had a Buffalo Terastation (model TS-1.0GL/R5) which is a Power PC (PPC) based NAS, fitted with 4x 250GB Western Digital drives. It went Pop 🙁

In my case it was configured to use all the drives as one large pot, although that’s not really important; underneath, I knew it was basically software RAID (mdraid) and the XFS file system.

First attempt was using a CentOS 6 (32-bit) box I had to hand; guess what, no XFS support under 32-bit, try again. So I used the Openfiler 2.3 install CD I had (the intention was to present the recovered data as a NAS again, but I never quite got that far). Importantly, Openfiler supports XFS and software RAID, and is in essence a minimised Linux distro with a web front end.
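
Not from the original recovery, but a quick way to sanity-check whether a given Linux box actually has XFS support before you commit to it:

grep xfs /proc/filesystems    # listed if support is built in or the module is already loaded
modprobe xfs                  # otherwise try loading the module, then run the grep again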

Once booted up (I hasten to add the boot / OS drive was a separate drive; I literally added the Terastation drives as additional SATA drives and did nothing else with them), I ran fdisk on the first Terastation drive and saw;

fdisk /dev/sda: 250.0 GB, 250059350016 bytes
 255 heads, 63 sectors/track, 30401 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
 /dev/sda1 1 48 385528+ 83 Linux
 /dev/sda2 49 65 136552+ 82 Linux swap / Solaris
 /dev/sda3 66 30378 243481141 83 Linux
 /dev/sda4 30378 30401 192779 83 Linux

Which was promising, in fact all 4 drives had identical partition structures. The first partition (/dev/sda1) is actually a raw xfs partition and is the / partition of the Terastation if you want to be nosy.

/dev/sda2 is swap; if I’m honest I have no idea what /dev/sda4 is (/boot maybe? I didn’t look), but /dev/sda3 (and /dev/sdb3, /dev/sdc3, and /dev/sdd3) are the big ones with my data, and the software RAID. There was lots of poking, prodding, reading, and giving up as a lost cause at this point. I’m more than comfortable hacking about in Linux, it’s part of my day job, but software RAID and mdadm are not my regular toys, and whilst a Mac fan my indoctrination didn’t come until the Intel days, so I am no PowerPC expert. But the end result was discovering that, in essence, a software RAID array created on a PPC (big-endian) device was not going to ‘appear’ under an i386 (little-endian) Linux appliance, so give up … or google a little more.

At this point mdadm --examine was seeing nothing; there was no hint of a software RAID, it might as well not have been there, but I knew it was. It turns out that you can change the byte order (big-endian vs little-endian – google it if you want), and you need to do this on all components of the array, so basically;

 mdadm -A /dev/md0 --update=byteorder /dev/sda3
 mdadm -A /dev/md0 --update=byteorder /dev/sdb3
 mdadm -A /dev/md0 --update=byteorder /dev/sdc3
 mdadm -A /dev/md0 --update=byteorder /dev/sdd3

At this point we can ask mdadm to have a look, and you should get something similar to;

mdadm --examine /dev/sdb3
 /dev/sdb3:
 Magic : a92b4efc
 Version : 00.90.02
 UUID : 39b220be:f9b75e99:849a28e5:f10c758f
 Creation Time : Sat Aug 29 11:09:31 2009
 Raid Level : linear
 Raid Devices : 4
 Total Devices : 4
 Preferred Minor : 1
Update Time : Sun Mar 6 17:05:45 2011
 State : active
 Active Devices : 4
 Working Devices : 4
 Failed Devices : 0
 Spare Devices : 0
 Checksum : f8cc653a - correct
 Events : 0.8
Rounding : 64K
Number Major Minor RaidDevice State
 this 3 8 51 3 active sync /dev/sdd3
 0 0 8 3 0 active sync /dev/sda3
 1 1 8 19 1 active sync /dev/sdb3
 2 2 8 35 2 active sync /dev/sdc3
 3 3 8 51 3 active sync /dev/sdd3

Bingo, the RAID array components are now being recognised, so stitch them back together with;

mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
 mdadm: /dev/md0 has been started with 4 drives.
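
(Not part of the original output, but at this point a quick look at /proc/mdstat is a standard way to double-check that md0 really is up with all four members:)

cat /proc/mdstat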

Then give it somewhere to mount (but read only for now, to be safe);

mkdir /mount/md0
 mount -o ro -t xfs /dev/md0 /mount/md0
 mount: /dev/md0: can't read superblock

Cock 🙁 At this point you may be laughing and busy copying off your data, but not me. I am not an XFS filesystem expert, but some more googling suggested some options, along with lots of warnings that this may further corrupt your data. Remember, I wasn’t too concerned if it ended badly, so you’ve been warned, at least twice!

xfs_repair /dev/md0
 Phase 1 - find and verify superblock...
 Phase 2 - using internal log
 - zero log...
 ERROR: The filesystem has valuable metadata changes in a log which needs to
 be replayed. Mount the filesystem to replay the log, and unmount it before
 re-running xfs_repair. If you are unable to mount the filesystem, then use
 the -L option to destroy the log and attempt a repair.
 Note that destroying the log may cause corruption -- please attempt a mount
 of the filesystem before doing this.

You have now been warned three times! Pressing on …

xfs_repair -L /dev/md0

… followed by similar output to above, confirming that the log had been zapped and recreated, but nothing else fatal, then to be sure;

xfs_check /dev/md0

Again nothing, so, try again;

mount -o ro -t xfs /dev/md0 /mount/md0

Bingo!

Openfiler saw the software raid array we’ve just re-created, but I’m assuming that because it relies mostly on LVM volume groups on top of physical disks / arrays it couldn’t do anything for me with an array of existing data, so I just fired up an SFTP client and copied everything off.
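
(If you’d rather pull the data off from the command line than via SFTP, something like the rsync below would do, assuming rsync is available on the box; the destination host and path are hypothetical:)

rsync -av /mount/md0/ user@backupbox:/srv/terastation-recovery/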

Hope this helps, please heed my warnings if your data is critical, but then if it were critical you’d have a backup, right?


Bupa Great North Run – 2011 – As run by Me

My run chart of the Bupa Great North Run in 2011, although I am pretty sure that long downhill in the middle never existed. I can’t embed the map, so click the external link;
Link – Great North Run 2011 by Flish at RunKeeper

Official Chip Time was 2hrs 16 minutes and 14 seconds and was run in aid of Butterwick Hospice, sponsorship page still available at http://www.justgiving.com…

Official Chip Timing:

Number: 45037
Chip Time: 02:16:14 Position: 21,802

I Found This Useful: Error message in OSX Console – Cant create kext cache under / – owner not root

Error message in OSX Console – Cant create kext cache under / – owner not root

I was getting an error in the console:

com.apple.kextd[10]: Cant create kext cache under / - owner not root

This can be fixed by doing the following command:

sudo chown root:admin /

Note, you will almost certainly be prompted for a password; normally this is your Mac login password, but it may be the password of a specific user with administrator privileges that you need to enter.
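
As a quick sanity check (my addition, not part of the original hint), you can confirm the ownership of / before and after the chown:

ls -ld /    # after the fix the owner and group should read root and admin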

via I Found This Useful: Error message in OSX Console – Cant create kext cache under / – owner not root.


Monitor Wi-Fi with Lion’s hidden tool | Macworld

Monitor Wi-Fi with Lion’s hidden tool 

by Lex Friedman, Macworld.com   Sep 5, 2011 2:00 pm

Hints reader nathanator11 discovered that Lion includes a handy app that provides all sorts of diagnostic information surrounding your wireless network. Much of the information the software generates gets pretty technical, but even Wi-Fi novices may find some of the details that the utility aggregates useful.

Wi-Fi Diagnostics is tucked away in the /System/Library/CoreServices folder. To get there, I pressed Shift-Command-G in the Finder (the equivalent of going to the Go menu and choosing Go to Folder), and then typed in the /System/Library/CoreServices path and pressed Return. Once in the folder, I found Wi-Fi Diagnostics and double-clicked it. Alternatively, you could launch the Terminal and type open “/System/Library/CoreServices/Wi-Fi Diagnostics.app”, and then press Return.
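
For reference, here is the Terminal route from the article as a single copy-and-pasteable command (the quotes are needed because of the space in the app name):

open "/System/Library/CoreServices/Wi-Fi Diagnostics.app"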

However you find and launch it, Wi-Fi Diagnostics gives you four options: Monitor Performance (which shows you signal strength, noise level, transmit power, and data rate); Record Events (which can keep a log of network happenings); Capture Raw Frames (which records everything coming and going on your Mac’s wireless connection); and Turn on Debug Logs.

 

 

If you’re at all interested in what’s going on with your Mac’s Wi-Fi connection or your wireless network, Wi-Fi Diagnostics is freely included with your copy of Lion, and you can’t break anything by poking around the app—so enjoy!

via Monitor Wi-Fi with Lion’s hidden tool | Networking & Wireless | Mac OS X Hints | Macworld.
