Monday, December 10. 2012
There's been a dramatic uptick in interest recently in my post on community alternatives for Internet access in locations where access has been cut off (either politically, as with Syria or Egypt, or environmentally, a la Sandy or Katrina). Coincidentally, I was at a conference today in Seattle where an oddball presentation on cellphone mesh networking was given by Josh Thomas of Accuvant Labs. I say oddball because the conference consisted largely of enterprise vendors pitching enterprise solutions to "Big IT" problems, and while mesh networking addresses a certain sort of problem, it's not one that enterprises put much thought into or are likely to spend any time adopting.
When I was suggesting mesh alternatives for re-establishing local networks, I focused on router-based solutions such as Meraki access points. Fixed-point hardware remains vulnerable to a number of ills, but it goes some way toward addressing the current dearth of realistic, amateur-deployable solutions in the field. Of course, in this day and age, the obvious question when it comes to establishing a real mesh network is: why not use cell phones? They are nearly ubiquitous, are by definition self-powered and portable, and each and every one of them represents a radio transceiver with respectable local range.
Until I saw the blurb in the conference seminar track list for SPAN, I had no idea anyone was actually working on such a thing.
So of course I went to check it out.
I don't even remember (and can't find in the various websites or presentation materials Thomas scattered about as he charged through the material) what SPAN stands for... Self-Powered Android Network? Except it also may sort of work on iPhones. More easily, in fact, since you don't have to root them first. At least not yet. It wasn't entirely clear to me, nor can I find any evidence of an app for the project in the iTunes store. But you can at least download the Android source code here, if you are equipped for diving into such things.
Although it didn't live up to the billing in the conference guide, Thomas's presentation was interesting and informative with respect to some of the challenges of mesh networking and the current state of the art in the field. It turns out that mobile transmitters introduce a variety of predictable but hard-to-solve problems for networking protocols. And that's after you address what may be an even harder problem: device manufacturers have not made it at all easy to re-purpose the handset radios in such a fashion.
The SPAN project's proximate goal was to fix some of the "easy" problems by putting a controllable toolset in the hands of other mesh developers, while providing some guidance toward cracking the "hard" problems. In broader terms, however, Thomas indicated that his more elemental purpose was to raise enough of a ruckus about phone-based mesh networking to get someone at Google to sit up and consider incorporating better tools into Android--or at least removing existing obstacles--to make it easier to build mesh networks on the platform.
That larger goal is not only laudable, it's probably a requirement for the success of phone-based mesh networking. Mesh networks live or die by the number of available nodes covering the territory. The ubiquity of cell phone ownership only serves that goal if enabling the network is within the grasp of the average user. Rooting a phone isn't difficult, but it's outside the comfort range, or beyond the effort, of most users. Perhaps in certain scenarios--such as Syria--a sufficiently motivated group might manage it in numbers large enough to make the network useful, but in general, it's not going to happen.
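To see why node count matters so much, here is a toy simulation of my own (nothing to do with SPAN's actual protocol, just the naive flooding baseline): stretch the spacing between nodes past radio range, and delivery simply stops.

```python
# Toy flooding simulation -- a sketch of why mesh coverage depends on node
# density. This is NOT SPAN's protocol, just the naive baseline.

def neighbors(nodes, radio_range):
    """Adjacency list: two nodes are linked if within radio range (meters)."""
    adj = {i: [] for i in range(len(nodes))}
    for i, (xi, yi) in enumerate(nodes):
        for j, (xj, yj) in enumerate(nodes):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= radio_range ** 2:
                adj[i].append(j)
    return adj

def flood(adj, src, dst):
    """Every node rebroadcasts each message it hears; True if dst hears it."""
    heard, frontier = {src}, [src]
    while frontier:
        frontier = [n for node in frontier for n in adj[node] if n not in heard]
        heard.update(frontier)
    return dst in heard

# A chain of ten phones 80 m apart with 100 m of range relays end to end...
chain = [(i * 80.0, 0.0) for i in range(10)]
print(flood(neighbors(chain, 100), 0, 9))   # True

# ...but stretch the spacing past radio range and the flood dies at the source.
sparse = [(i * 120.0, 0.0) for i in range(10)]
print(flood(neighbors(sparse, 100), 0, 9))  # False
```

Real protocols are far cleverer than blind flooding, but no routing scheme can conjure a path where the node density leaves gaps wider than a radio hop.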
SPAN takes an approach that enables just about any network-aware application by installing itself beneath the greater part of the Android network stack. This addresses a separate but related ease-of-installation problem: unless all the desired apps work over the mesh layer the same way they do over a regular network connection, users face yet another barrier, namely the necessity of downloading special mesh-enabled versions of their apps (if such even exist) and remembering to use them when off the regular network.
It's a bit of a chicken and egg problem, but at this point, I think most users would be more capable of downloading and using separate apps than of rooting their phones. Neither approach is ideal, which is why getting Google on board may be the best possible solution.
The odds of accomplishing that may not be insurmountable, given the number of other mesh projects that are underway based on, or at least accessible to, Android devices. Thomas was kind enough to include links in his slide deck:
Each has its drawbacks and benefits. None are easy, dependable, or fully functional. But exigency has driven technological development throughout history, and the dramatic circumstances in disaster-afflicted or war-torn parts of the world, combined with the exploding number of mobile computing devices in the hands of average citizens, leave me optimistic that these efforts will soon bear fruit.
Tuesday, October 23. 2012
If you're running Google Chrome or another Java-dependent program that is not compatible with Java 7 (the latest version), then Apple's most recent set of updates has severely broken your functionality. It took me a little digging to find the solution, largely because Apple does not seem interested in helping you solve the problem; their support article on the topic references one particular obscure program that was similarly affected (and that hit a historic core constituency, graphic designers), and doesn't mention at all that the same fix applies to the Chrome problem. So I'm passing it along here:
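The gist of the workaround, as best I can reconstruct it, is to re-enable the Apple-provided Java SE 6 applet plug-in by hand. The paths below are from memory of Apple's support article and may not match your system exactly; verify them before running anything:

```shell
# Move the Java 7 plug-in aside (if present)...
sudo mkdir -p "/Library/Internet Plug-Ins/disabled"
sudo mv "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin" \
        "/Library/Internet Plug-Ins/disabled/"
# ...then symlink Apple's Java 6 plug-in back into place.
sudo ln -sf /System/Library/Java/Support/Deploy.bundle/Contents/Resources/JavaPlugin2_NPAPI.plugin \
        "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin"
```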
Following those steps worked for me to get the Chrome Java plug-in operating properly again using the previous version of Java on OS X 10.7.5. Your mileage may vary on other OS X versions.
This is particularly frustrating because Apple didn't actually remove the previous version of Java, the one that worked; they just broke it. While this is all undertaken under the guise of addressing security issues in Java (of which there are many, no question; if you can avoid using it, you should), if they weren't actually going to remove the allegedly dangerous software, it's hard to read this as something other than flipping the bird at their leading mobile competitor, Google. And, just as in the Maps imbroglio, they've done so by screwing their own users first and hardest.
Google is not utterly blameless here; their Chrome browser on OS X remains a 32-bit application, which is why it cannot load the 64-bit-only Java 7 plug-in, and one wonders whether this is their own shot back at Apple... again, using users as ammunition.
I'm not blind to business machinations and strategies, and ultimately you have to decide what customer base you are serving and focus on that, even if it is to the detriment of other potential users. The problem currently, if you are one of those users, is that the three-way battle at the top of the consumer technology space between Microsoft, Google, and Apple, leaves no real alternative for robust, intuitive, user-centric solutions that just work. They're so consumed with one another they've forgotten entirely about the user experience. It's turning into a real quandary for me when it comes to making recommendations to clients.
Tuesday, January 3. 2012
You may have noticed some pages on the site recently that have not been working properly; a message reading "An error has occurred while processing this directive" appeared at the top and some of the links didn't function correctly. Surprise, surprise... I just noticed this myself!
As is often the case when something is wrong with the site, this was the result of a cock-up at GoDaddy, where it is hosted. Other than the blog, this is not a particularly dynamic site... it pretty much just sits here and coughs up marketing text. Since that's not terribly demanding, technically speaking, it's not really worth hosting at a better service. The price I pay for this trade-off is one I've decided is affordable, but it's still sometimes annoying: the company makes random changes to their server configuration which then break parts of the site that had been working just fine, untouched, for years and years and years.
It turned out that the server had suddenly decided to have problems with the ".js" extension on the master file the site's pages pull in via a server-side include directive. I simply renamed the master file with a .html extension and changed the reference to '<!--#include virtual="/includes/page_management.html" -->', and everything started working again. This is only pedantically irritating (in that the file does not actually contain HTML and so is mislabeled) and serves as a quick workaround... until the next random change.
Sunday, October 31. 2010
Microsoft, that is. I like to get my hands dirty from time to time and actually work with the technologies I disparage, the better to make informed recommendations; to that end, I have a few projects, mostly personal, where I handle most of the configuration myself. One of them, which I happen to be messing with this morning, involves a SQL 2K5 backend into which I have to import a bundle of data from an Excel spreadsheet.
This spreadsheet contains a lot of different sheets, all with identical structure, which I want to import into a single table. I created the table ahead of time, ran the built-in Import/Export Wizard, and laboriously, manually selected that table as the destination for every listed sheet. Then I hit "Next," expecting that this basic, basic, basic level of functionality--a process that need not have changed dramatically since the early days of Microsoft Access, a process that has helped win arguments for integrated installations using exclusively Microsoft products at every level ("they work together!") and has even gotten the company into hot water over anti-trust issues ("they work together, it's a conspiracy!")--would simply work.
Instead, I get this gem of an error message:
The same destination table name [database].[dbo].[table] is used more than once. All destination table names must be unique.
WHY? Why must they be unique? Is it unheard of that people might want to append several different sources of data into the same table?
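Append-style loads are, after all, the bread and butter of SQL; outside the wizard, the pattern is trivial. A sketch using Python's built-in sqlite3 as a stand-in for SQL Server (table and sheet names are made up):

```python
import sqlite3

# The pattern the wizard rejects: several identically-structured sources
# all appended into one destination table. sqlite3 stands in for SQL Server.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE combined (region TEXT, amount REAL)")

# Stand-ins for the individual worksheets
con.execute("CREATE TABLE sheet1 (region TEXT, amount REAL)")
con.execute("CREATE TABLE sheet2 (region TEXT, amount REAL)")
con.execute("INSERT INTO sheet1 VALUES ('north', 10.0), ('south', 20.0)")
con.execute("INSERT INTO sheet2 VALUES ('east', 30.0)")

# Same destination table, used more than once -- no complaints here.
for src in ("sheet1", "sheet2"):
    con.execute(f"INSERT INTO combined SELECT * FROM {src}")

print(con.execute("SELECT COUNT(*) FROM combined").fetchone()[0])  # 3
```

On SQL Server itself, the usual escape hatch is one INSERT ... SELECT per sheet via OPENROWSET and the Jet OLE DB provider, bypassing the wizard entirely.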
I'm used to cryptic error messages from SQL Server, though, so after I get no hits on that one in particular, I back off and try importing just one of the sheets. The process kicks off, but then completes with another error:
The product level is insufficient for component "Source - Sheet"
While I'm grinding my teeth over that one, I notice a little "Copy" icon on the error message pop-up. I hover over it and it says "Copy message text."
"How clever!" I think, momentarily impressed. Highlighting text in modal error pop-ups has long been an impossibility, and I was glad that finally some common sense had been applied, probably by some junior programmer with time on his hands. If you're going to have a buggy product, you may as well at least make it easier for users to search for more information on solutions to the bugs.
I click the little button and happily open up Internet Explorer, then hit paste in the search box. It pastes in the absolutely worthlessly generic "TITLE: SQL Server Import and Export Wizard" ... the title of the pop-up, not the actual error message text. Even the bug-mitigation is buggy.
For that one, at least, I could find some references; sure enough, bugs. The fixes and workarounds are both rather appalling. In the end, I think it's going to be faster and more reliable to just dig out my old Access 97 CD, install it, and do a two-step import, which is what I did with all the original data in the first place.
In the course of researching this problem I have run across a lot of commentary suggesting that SQL Server 2005 just plain sucks, but unfortunately (like most of the other commenters) I'm pretty well locked into it at this point, on this project. When older products are widely perceived as better than newer ones, though, it makes a powerful argument that the company producing them is on the wrong path.
Wednesday, June 24. 2009
Green IT is one of the new industry buzzwords that has come along with a recession, an environmentally-minded president, and an increasing awareness that "green" is economical. More efficient allocation of resources leads to a better bottom line. Virtualization is making such allocation easier for businesses of all sizes to adopt. At the same time, recycling options for old hardware are expanding, and the necessity of procuring new hardware is diminishing, or at least stretching out over a longer timeline.
There is an interesting article in MIT's Technology Review today about a project that used the lightweight OS code base developed for the XO laptop to run older desktop PCs with better performance than would be possible with "modern" operating systems such as OS X or Windows.
A lot of people think the catch phrase for the green movement is just "Recycle"... it's actually "Reduce, Re-use, Recycle," and it's intended to be applied in that order. Start by using less, make better use of what you do have, and only then, if neither of those actions applies, should you actually recycle equipment.
Many organizations are locked into patterns of reliance on the latest and greatest operating systems, although the functionality of those systems is arguably equivalent to older software in many situations (even on the newest and fastest hardware). These organizations have bought into the industry-approved upgrade cycle and don't see the use for older hardware that can't run their standard software.
The thing is, though, much of that standard software can be run in some fashion even on older hardware, if one is willing to reconsider how and where it runs and look at it from a strictly functional perspective.
The answer, of course, is to use a terminal services environment, with a single powerful server handling the heavy lifting of software operation, and the older machines as dumb terminals. This isn't perceived as practical for many organizations because they still insist on maintaining smart client operating systems on those desktops, even when their primary use is as terminals. From a certain perspective, this makes sense; if you are a Windows shop, and you want centralized management, then you reduce your costs by maintaining a single Windows version across your platforms. It's nonsensical, however, when those platforms are simply used to access web or terminal services; client management is only a serious issue when expensive and complex clients need to be maintained for desktop operations. A cheap dumb terminal is fire and forget: drop it in place, run it into the ground, replace it with another if it fries. There is nothing to get infected, stolen, or corrupted... what difference does client management make?
My old favorite for re-purposing older machines into dumb terminals was the PXES Universal Linux thin client boot disk. I see that the project has since gone mainstream, however, and has non-competes with other providers; they are now recommending a project called cult, which looks similar but which I haven't had a chance to try out yet. Cult, PXES, or similar open-source thin-client distributions allow old hardware to be repurposed as a terminal client as easily as popping a CDROM in the drive and turning on the power.
The old machine can be configured to boot directly into an RDP-based terminal session; the user logs in and runs everything without even noticing the difference between the Windows login they have just made and the conventional, but more expensive, thick-client version they are already used to. In most cases, the boot time is faster than anything possible with a modern smart-client PC, even on the oldest hardware. What a deal... use old hardware and improve performance and the user experience at the same time!
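As a concrete example, the entire "configuration" of such a terminal can be a few lines in its boot script invoking the stock Linux RDP client (the hostname here is a placeholder for your own terminal server):

```shell
# Boot-time script on the thin client: hand the user a full-screen RDP
# session on the terminal server; the loop respawns it if they log out.
while true; do
    rdesktop -f termserver.example.local   # hostname is a placeholder
done
```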
Of course, if you are willing to look at web-based alternatives to your Windows applications, it gets even easier. There's no need to set up a Terminal Server when someone else has already procured and configured the servers for you (as have all SaaS providers). In those cases, a lightweight, specialized distribution such as xPUD, Xubuntu, or Damn Small Linux can either be installed to an old, small hard drive, or booted just like cult from a CD or USB stick, putting the user at a rudimentary desktop with web access in a matter of seconds. All the heavy lifting other operating systems do is unnecessary when all your processing is happening on the other end of an Internet connection. Firefox is a safe, stable browser to run on a lightweight, impenetrable, disposable Linux platform to access those services. When Google finally releases Chrome for Linux, a browser specially built for running web-based applications, the case will be even easier to make.
So hold off on your trip to the local recycling facility; slide a fresh CD into the drive and simply recycle your machines in place.