A short brain dump on automated testing

Sometimes you come across a blogger who’s covered a topic so thoroughly that you can drill down through years of his writing and keep finding awesome stuff. Alister Scott’s blog “watirmelon” is such a source. Here are some highlights…

Make sure to read through his Automated Testing slide deck!

http://watirmelon.com/2012/01/31/introducing-the-software-testing-ice-cream-cone/

“I propose we rename QA to mean Quality Advocate … Whilst their responsibilities include testing, they aren’t limited to just that. They work closely with other team members to build quality in, whether that be through clear, consistent acceptance criteria, ensuring adequate automated test coverage at the unit/integration level, asking for dev box walkthroughs, or encouraging collaboration/discussion about doing better testing within the team.”

http://watirmelon.com/2013/02/25/are-software-testers-the-gatekeepers-or-guardians-of-quality/

A:  User stories aren’t ‘done’ until you’ve tested each of them, which means you get to provide information to the Product Owner about each of them. You define the quality bar and you work closely with your team and product owner to strive for it.

B: Whilst you think you may define the quality of the system, it’s actually the development team as a whole that does that. Everyone is under pressure to deliver and if you act like an unreasonable gatekeeper of quality, you’ll quickly gain enemies or have people simply go around or above you.

http://watirmelon.com/2013/05/08/should-you-use-the-givenwhenthen-format-to-specify-automated-acceptance-tests/

A: Given/When/Then format provides a high level domain specific language to specify the intention of automated acceptance tests (very easily transferred from a user story) separate to the implementation of your automated acceptance tests. This separation allows changing the test from testing the UI to testing an API without changing the intention of the test.

B:  One of the selling points of writing Given/When/Then tests is that they are readable by business. But in reality, business never read your Given/When/Then specifications, so it makes no sense to invest in them.
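To make A’s point about separating intention from implementation concrete, here’s a sketch of my own (not from the post): the test body reads as Given/When/Then, while the steps delegate to a driver that could just as easily be backed by a UI driver or an API client. All of the account types here are invented for the example.

[code language="csharp"]
using System;
using System.Collections.Generic;
using NUnit.Framework;

// The driver interface: one implementation could drive the UI, another could call the API.
public interface IAccountDriver
{
    Guid Create(decimal openingBalance);
    void Withdraw(Guid account, decimal amount);
    decimal GetBalance(Guid account);
}

[TestFixture]
public class WithdrawalTests
{
    private IAccountDriver accounts;
    private Guid accountId;

    [SetUp]
    public void SetUp()
    {
        accounts = new InMemoryAccountDriver(); // swap in a UI or API driver here
    }

    [Test]
    public void Withdrawing_reduces_the_balance()
    {
        // The intention, in Given/When/Then form, never needs to change...
        GivenAnAccountWithABalanceOf(100m);
        WhenIWithdraw(40m);
        ThenTheRemainingBalanceShouldBe(60m);
    }

    // ...only these step implementations change when the test moves from UI to API.
    private void GivenAnAccountWithABalanceOf(decimal balance) { accountId = accounts.Create(balance); }
    private void WhenIWithdraw(decimal amount) { accounts.Withdraw(accountId, amount); }
    private void ThenTheRemainingBalanceShouldBe(decimal expected) { Assert.AreEqual(expected, accounts.GetBalance(accountId)); }
}

// Trivial in-memory driver so the sketch stands on its own.
public class InMemoryAccountDriver : IAccountDriver
{
    private readonly Dictionary<Guid, decimal> balances = new Dictionary<Guid, decimal>();
    public Guid Create(decimal openingBalance) { var id = Guid.NewGuid(); balances[id] = openingBalance; return id; }
    public void Withdraw(Guid account, decimal amount) { balances[account] -= amount; }
    public decimal GetBalance(Guid account) { return balances[account]; }
}
[/code]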

“Quick wins give you breathing space to develop a good solution.”

http://watirmelon.com/2009/10/01/not-ruining-your-test-automation-strategy/

Alister: Automated testing through non-GUI means is smart, but sometimes you have no choice. I have made automated testing through the GUI reliable and maintainable, but it required skill on my part. Automated GUI tests can be used to deliberately show discrepancies in the GUI, often highlighting unintended GUI changes. It’s generally not a good idea to completely write something off because you may have seen it done poorly yourself. It’s like saying Agile is wrong because you worked somewhere where Agile was done poorly.

Bob:  My beef is not with GUI testing tools per se. Rather it is with teams that test their entire app through the GUI. You are correct in that sometimes you have no choice. In such cases very careful test construction can mitigate the fragility problem. But no amount of care can come close to competing with an approach that runs the majority of tests through the API.

QA? Project Management? …or just Paradevs?

http://watirmelon.com/2013/01/31/so-what-exactly-is-a-paradev/

A couple of years ago now, just after I started at ThoughtWorks, I read a tweet from a fellow ThoughtWorks developer here in Brisbane along the lines of “the paradevs at work enjoyed my lunchtime session on networking”. My ears pricked: “what’s a paradev?” I asked. “It’s someone who helps the developers develop” she replied. “Oh” I thought.

http://watirmelon.com/2013/05/07/do-you-even-need-a-software-tester-on-your-agile-team/

A: If you don’t particularly care about quality, have good production monitoring, and can get internal engineers and major partners to do your QA then you may get away with not having a tester on your agile team.

B: Software testers provide a unique questioning perspective which is critical to finding problems before go-live. Even with solid automated testing in place, nothing can replicate the human eye and human judgement.

http://watirmelon.com/2013/04/14/which-is-better-manual-or-automated-testing/

A:  Even when automating a test scenario, you have to manually test it at least once anyway to automate it, so automated testing can’t be done without manual testing.

B: Because the automated tests are explicit, they also execute consistently as they don’t get tired and/or lazy like us humans.  Automated tests also allow you to test things that aren’t manually possible, for example, ‘what if I processed ten transactions simultaneously’.
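As a small illustration of that last point (my own sketch, not from the post), an automated test can fire ten transactions at once and check that they all succeed; the TransactionProcessor below is just a stand-in for a real system under test.

[code language="csharp"]
using System.Linq;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class ConcurrentTransactionTests
{
    [Test]
    public async Task Processes_ten_transactions_simultaneously()
    {
        var processor = new TransactionProcessor();

        // Kick off ten transactions at once, something no manual tester can do reliably.
        var tasks = Enumerable.Range(1, 10)
                              .Select(i => processor.ProcessAsync(amount: 10m * i));

        var results = await Task.WhenAll(tasks);

        Assert.IsTrue(results.All(r => r.Succeeded), "every concurrent transaction should succeed");
    }
}

// Stand-in so the sketch compiles on its own; a real test would target the actual service.
public class TransactionProcessor
{
    public async Task<TransactionResult> ProcessAsync(decimal amount)
    {
        await Task.Delay(10);                 // simulate some I/O work
        return new TransactionResult { Succeeded = amount > 0 };
    }
}

public class TransactionResult
{
    public bool Succeeded { get; set; }
}
[/code]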

http://watirmelon.com/2013/04/14/test-in-production/

The key to testing changes as soon as they hit production is to have real-time, continuous real user experience monitoring. More comprehensive automated acceptance tests can be written in a non-destructive style that means they can be run in production. These can be run immediately following a fresh production deployment, and as feedback from the tests comes in, any issues can be fixed, deployed straight to production, and tested again.

http://watirmelon.com/2013/04/13/choosing-a-language-for-your-automated-acceptance-tests/

A: Automated acceptance tests shouldn’t be developed in isolation, so having them written in the same language as your application (usually C# or Java) will ensure that the programmers are fully engaged and will maximize the likelihood of these tests being maintained alongside your application code.

B: If your software testers are responsible for writing and maintaining your automated acceptance tests, then it makes sense to let them write in dynamic scripting languages. These languages are popular with testers because they are lightweight to install, easy to learn, and free of licensing costs, which allows an unlimited number of build agents to run the tests as part of continuous integration. As testers develop their skills in these languages, they can quickly create throwaway scripts to perform repetitive setup tasks required for their story or exploratory testing, such as creating multiple records or rebuilding a database.

http://watirmelon.com/2013/04/13/who-should-write-your-automated-acceptance-tests/

A: The benefit of having the programmers on your team write and maintain these tests is that the tests will be updated and executed as soon as any change occurs, so they’ll be kept up to date and are less likely to go stale.

B: Software testers are particularly good at building automated acceptance tests that cover an end-to-end process in the system; often called user journeys. This is because they have a good understanding of the journey whereas a programmer may only understand the logic behind a particular screen. Testers should be involved in writing this style of  acceptance tests so they are representative of real usage.

http://watirmelon.com/2013/03/10/is-test-management-wrong/

Now, each agile team is responsible for its own quality, and the tester advocates for quality through accurate acceptance criteria, unit testing, automated acceptance testing, story testing and exploratory testing. These activities aren’t managed in a test management tool, but against each user story in a lightweight story management tool (such as Trello or Mingle). The tester is responsible for managing his/her own testing. Step-by-step test cases (such as those in Quality Center) are no longer needed, since each user story has acceptance criteria and each team writes automated acceptance tests for the functionality it develops, which act as both automated regression tests and living documentation.

http://watirmelon.com/2013/05/20/do-you-need-an-automated-acceptance-testing-framework/

A: If you’re starting off with automated acceptance testing and you don’t have some kind of framework, eg, page object models, in place then you can quickly develop a mess.

B: Over-engineered automated acceptance test frameworks are harmful for a team as they dictate certain ways of doing things which means the team can be less efficient in developing what they need to deliver.
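As a small aside on the page object models mentioned in A, here’s roughly what that kind of lightweight structure looks like. This sketch is my own, assumes Selenium WebDriver, and the element IDs are made up.

[code language="csharp"]
using OpenQA.Selenium;

// A minimal page object: tests talk to this class, and only this class knows the
// page's locators, so UI changes are absorbed in one place instead of in every test.
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public LoginPage EnterCredentials(string userName, string password)
    {
        driver.FindElement(By.Id("username")).SendKeys(userName);
        driver.FindElement(By.Id("password")).SendKeys(password);
        return this;
    }

    public void Submit()
    {
        driver.FindElement(By.Id("login-button")).Click();
    }
}
[/code]

A test then reads something like new LoginPage(driver).EnterCredentials("alice", "secret") followed by Submit(), and any churn in the page’s markup stays contained in this one class.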

http://pragprog.com/book/achbd/the-rspec-book

Caution! You’ve fallen for a trap. You’ve picked up this book thinking it was about RSpec. Fortunately, you decided to read the foreword. Good! That gives me the opportunity to tell you about the mistake you just made and possibly save you from an unexpected fate. -Uncle Bob

http://watirmelon.com/2011/05/31/an-automated-testing-journey/

MVC Route Testing Boilerplate with JustMock

Back in the day, Phil Haack wrote up a good guide for unit testing the routes created in ASP.NET MVC. I just set up these tests in a new MVC 4 project with JustMock as my mocking framework, so I wanted to share the small modifications to his code needed to make it work there.

First we have a few helper methods that mock up an HttpContextBase and allow a URL to be resolved into RouteData.

[code language="csharp"]
public static void AssertRoute(RouteCollection routes, string url,
Dictionary<string, string> expectations)
{
var httpContextMock = Mock.Create<HttpContextBase>();
Mock.Arrange(() => httpContextMock.Request.AppRelativeCurrentExecutionFilePath)
.Returns(url);

RouteData routeData = routes.GetRouteData(httpContextMock);
Assert.IsNotNull(routeData, "Should have found the route");

foreach (string property in expectations.Keys)
{
Assert.IsTrue(string.Equals(expectations[property],
routeData.Values[property].ToString(),
StringComparison.OrdinalIgnoreCase)
, string.Format("Expected '{0}', not '{1}' for '{2}'.",
expectations[property], routeData.Values[property].ToString(), property));
}
}

public static void AssertIgnoreRoute(RouteCollection routes, string url)
{
var httpContextMock = Mock.Create<HttpContextBase>();
Mock.Arrange(() => httpContextMock.Request.AppRelativeCurrentExecutionFilePath)
.Returns(url);

RouteData routeData = routes.GetRouteData(httpContextMock);
Assert.IsNotNull(routeData, "Should have found the route");
Assert.IsInstanceOf<StopRoutingHandler>(routeData.RouteHandler);
}
[/code]

Tests for the default route and the basic controller/action/id route:
[code language="csharp"]
[Test]
public void RegisterRoutes_AddsDefaultRoute()
{
var collection = new RouteCollection();
RouteConfig.RegisterRoutes(collection);
var expectations = new Dictionary<string, string>();
expectations.Add("controller", "home");
expectations.Add("action", "index");
expectations.Add("id", "");
AssertRoute(collection, "~/", expectations);
}

[Test]
public void RegisterRoutes_AddsControllerActionIdRoute()
{
var collection = new RouteCollection();
RouteConfig.RegisterRoutes(collection);
var expectations = new Dictionary<string, string>();
expectations.Add("controller", "home");
expectations.Add("action", "index");
expectations.Add("id", "1");
AssertRoute(collection, "~/Home/Index/1", expectations);
}
[/code]

…and an easy test to make sure that .axd handlers are not routed through the routing engine:
[code language="csharp"]
[Test]
public void RegisterRoutes_IgnoresAxd()
{
var collection = new RouteCollection();
RouteConfig.RegisterRoutes(collection);
AssertIgnoreRoute(collection, "handler.axd/somestuffhere");
}
[/code]

Curation

You may not know that I have a couple of iPhone apps which I’ve submitted to the Apple store, and which have not been approved because of a lack of polish or focused value to the people who would buy them. It’s a pain for me, but in the end I have to be thankful that Apple takes an interest in the quality of what’s on the store.

I was just browsing the Windows 8 App Store and found two “Top Paid” apps, one called “Word++” and one called “Windows Media Player 9”, neither one from Microsoft, but each looking as close as they can to being an actual Microsoft app. I’m not impressed that Microsoft can’t keep that kind of fraud-ware out of their store.

Chrome Bad!


I’m so sad that I’m increasingly of the opinion that Google does not have my interests at heart. Really I don’t mind if they don’t care about *me*…it’s that they are losing the values that made them great to start with.

The End of the Password

I seriously cannot wait for us to be done with the password! The idea of a human-remembered secret to protect our access hasn’t really been a safe or secure one since people started plugging phones into computers. Hopefully we’re starting to see some action on this front, with Michael Barrett (CISO of PayPal) starting an alliance to “obliterate user IDs and passwords and PINs from the face of the planet.”

The FIDO Alliance seems to be interested in taking a set of biometrics, USB storage, and TPM-embedded hardware and using it to provide secure authentication across the web. Certainly this is an idea whose time is nearly here, with easy-to-use services providing open two-factor authentication for applications, and the advent of identity federation services.

We also need it very badly: a large proportion of the high-profile security breaches reported in the press are both caused by and result in password disclosure. Disclosed passwords, even those stored as one-way hashes, are getting easier to crack by brute force. It’s also easy to social-engineer your way into someone’s passworded accounts and completely derail their life. The current best practices for password management systems were defined in 1985 and are still implemented poorly and incompletely; we can do better. Passwords also put a responsibility on engineering groups to store them securely, so that a compromised password on one system doesn’t lead to many compromised systems (algorithms like scrypt, bcrypt and PBKDF2 with high iteration counts can do the trick [1] [2]).
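As a concrete (and purely illustrative) sketch of that last parenthetical: .NET’s built-in PBKDF2 implementation, Rfc2898DeriveBytes, makes slow, salted hashing with a high iteration count a few lines of code. The class and the stored-string format below are my own example, not from any of the linked posts.

[code language="csharp"]
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    // Derive a 256-bit hash from the password with PBKDF2 (Rfc2898DeriveBytes),
    // using a random per-user salt and a deliberately high iteration count.
    public static string Hash(string password, int iterations = 100000)
    {
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            var hash = pbkdf2.GetBytes(32);
            // Store the parameters alongside the hash so they can be raised later.
            return string.Format("{0}.{1}.{2}", iterations,
                Convert.ToBase64String(salt), Convert.ToBase64String(hash));
        }
    }

    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split('.');
        var iterations = int.Parse(parts[0]);
        var salt = Convert.FromBase64String(parts[1]);
        var expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            var actual = pbkdf2.GetBytes(expected.Length);
            // A constant-time comparison would be better in production code.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}
[/code]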

But even with the best password authentication system we can design, we are still stuck with a link between the keyboard and the user’s memory as the essential component of verifying who’s trying to gain access. Passwords should be complex, unique, and hard to lose. This is not a job for a person’s scattered memory, and the combination of better identity tools, including biometrics and mobile devices, can take us beyond the idea of ‘accounts’ with ‘usernames’ and ‘passwords’ and toward a more serious idea of identity.

Where I go for my tech and development news fix

A friend recently asked me what blogs to follow for learning more about software engineering, and I gave him this list. I thought I’d share it here.

Udi Dahan – The Software Simplist – Udi is one of the best people writing on the subject of large system architecture in the enterprise. I get a lot of value just trying to understand the words he uses, let alone his ideas.

Ayende @ Rahien – Ayende Rahien aka Oren Eini is a fantastic coder, responsible for NHibernate, Rhino Mocks, Entity Framework Profiler, and RavenDb. His daily posts follow the things he’s learning and working on as well as broader insights into coding in the .NET world.

Scott Hanselman – Hillsboro resident Scott Hanselman is one of the celebrities of the .NET world. He currently works on the ASP.NET/Azure team at Microsoft and constantly pushes to open-source the frameworks he works on.

Alvin Ashcraft’s Morning Dew – This is my ‘go to’ resource for everything else that happens in .NET land. Alvin collects the best blog posts of the day and provides you a quick list of things to look at. Much better than subscribing to dozens of blogs.

Knock Me Out – Ryan Niemeyer writes about the various ways to effectively use Knockout.js in your projects, how to solve sticky problems, and how to improve the performance of your pages.

Steven Sanderson’s blog  – Author of one of the better books on ASP.NET MVC, as well as the Knockout.js library, Sanderson provides insights on web tech.

Techmeme – News of the technology world, sorted and grouped by lead story. Find the best article on the news of the day without hitting all the news sites.

Hacker News – Links and discussion from the world of venture-funded software startups.

C# on Reddit – Discussion on the C# world.

Krebs on Security – Automated network hacking devices, zero-day exploits, and ATM skimmers: hot security stories from an information security researcher.

Seth’s Blog – Wisdom and insight from one of the wizards in the white hat marketing world. Learn to be a better person and a better contributor in your work.

Schneier on Security – The Chuck Norris of information security. Broad insights into the philosophy and future of secure systems.

Entity Framework Migrations and Database Initialization vs. MiniProfiler

The Problem:

If MiniProfiler is initialized before our Entity Framework database initialization strategies execute, the initialization fails with an error about a missing migration table.

If the Entity Framework database initialization strategies execute first, access to entities fails with a type-casting exception as the MiniProfiler DbConnection is forced into a SqlConnection variable (inside an internal generic).

The Cause:

When MiniProfiler initializes, it uses reflection to retrieve a collection of database providers from a private static field in System.Data.Common.DbProviderFactories. It then rewrites this list with MiniProfiler shim providers to replace the native providers. This allows MiniProfiler to intercept any calls to the database silently.

When Entity Framework initializes, it starts to compile the data models and create cached initialized databases stored in System.Data.Entity.Internal.LazyInternalContext inside some private static fields. Once these are created, queries against the DbContext use the cached models and databases which are internally typed to use the providers that existed at initialization time.

When the Entity Framework database initialization strategy runs, it needs access to the bare, native Sql provider, not the MiniProfiler shim, in order to correctly generate the SQL to create tables. But once these calls to the native provider are made, the native provider is cached into LazyInternalContext and we can no longer inject the MiniProfiler shims without runtime failures.

My Solution:

Access the private collections inside System.Data.Entity.Internal.LazyInternalContext and clear out the cached compiled models and initialized databases.

If I perform this purge between the operation of the EF database initialization strategies and the initialization of MiniProfiler, the MiniProfiler shims can then be inserted without causing later runtime failures.
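To make the ordering concrete, here is roughly how the pieces line up in Application_Start. MyContext, Configuration, and ClearEntityFrameworkInternalCaches are illustrative names of my own, and the exact MiniProfiler initialization call depends on which MiniProfiler/EF package version you use, so I’ve left it as a comment.

[code language="csharp"]
protected void Application_Start()
{
    // 1. Run the EF database initialization strategy against the native SQL provider.
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyContext, Configuration>());
    using (var context = new MyContext())
    {
        context.Database.Initialize(force: true);
    }

    // 2. Purge LazyInternalContext's cached models and initialized databases
    //    (the reflection code shown below under "Code").
    ClearEntityFrameworkInternalCaches();

    // 3. Only now let MiniProfiler swap its shim providers in, so the models that
    //    get recompiled on first use are bound to the profiled provider.
    //    (MiniProfiler / MiniProfiler.EF initialization call goes here.)
}
[/code]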

Code:

This code did the trick for me:
[code language="csharp"]
// Reach into EF's internal LazyInternalContext and clear its cached
// initialized databases and compiled models.
Type type = typeof(DbContext).Assembly.GetType("System.Data.Entity.Internal.LazyInternalContext");
object concurrentDictionary = type.GetField("InitializedDatabases", BindingFlags.NonPublic | BindingFlags.Static).GetValue(null);
var initializedDatabaseCache = (IDictionary)concurrentDictionary;
if (initializedDatabaseCache != null) initializedDatabaseCache.Clear();
object concurrentDictionary2 = type.GetField("CachedModels", BindingFlags.NonPublic | BindingFlags.Static).GetValue(null);
var modelsCache = (IDictionary)concurrentDictionary2;
if (modelsCache != null) modelsCache.Clear();
[/code]

Warning:

It appears that the names of the internal fields in LazyInternalContext change between versions of EF, so you may need to modify this code to work with the exact version of EF that you include in your project.

Ridiculous Cellular Internet Directionality

I’ve taken to testing my cellular phone in a 360 degree rotation when I camp out at a restaurant with my laptop. I’ve discovered that I can get a 4x speed improvement by pointing it the right way.

The punchline is that the best upstream and best downstream bandwidth seem to come from different directions.

Update: Whether the phone is face down or face up also affects the result. Ping times also vary a lot by directionality. I had best results with the phone face up, and the bottom pointed in the direction of the least signal blockage (my guess).

Disabling Dell Laptop ‘NUM LOCK: ON’ and ‘NUM LOCK: OFF’ messages

I have a nice Dell Precision desktop replacement laptop, and a while back, I noticed that it had a cool software driver feature. When I enabled or disabled NUMLOCK, I’d get a visual notification on the screen that this had happened.

Unfortunately, I eventually found that this feature interacted poorly with some remote desktop and VM software, causing the notification to flicker on and off distractingly. Occasionally this happened to me when I was trying to run a presentation on my laptop. Nasty.

Today I spent the effort to find out how to fix the problem. It turns out that this clever little notification is an undocumented feature of the Dell Bluetooth driver’s tray application. You can eliminate it by killing the tray icon application BTTRAY.EXE. Disabling the icon in the app’s settings does not do the trick; you must kill the app. There is also no configurable setting in the app to disable the NUM LOCK visual notification.

Luckily, there is a registry setting that can disable the feature while allowing you to use your bluetooth device fully.

[HKEY_LOCAL_MACHINE\SOFTWARE\Widcomm\BTConfig\General]
"KeyIndication"=dword:00000000

After changing this key, either restarting your computer or relaunching

C:\Program Files\WIDCOMM\Bluetooth Software\BTTray.exe

will disable the popups.

If this does not work for you, look into ‘quickset’, as others have reported that it also can cause this issue.

The Eleventh Fallacy of Enterprise Computing

I’m collecting some posts here that have been lost in the history of the internet. I’ve recovered them from the Wayback Machine (archive.org): Here, Here, and Here.

 

 

MONDAY 17 MAY 2004
The 11th Fallacy of Enterprise Computing 

As many of you know, I’ve leveraged and extended “The Eight Fallacies of Distributed Computing” originally created by Peter Deutsch (and extended by James Gosling) to add two more and call them “The Ten Fallacies of Enterprise Computing” for the Effective Enterprise Java book. At the Reston, VA No Fluff Just Stuff Symposium, though, an attendee suggested, in response to an answer I gave, that perhaps I was missing one more, the 11th Fallacy:

11. Business logic can and should be centralized.

The reason this is a fallacy is because the term “business logic” is way too nebulous to nail down correctly, and because business logic tends to stretch out across client-, middle- and server- tiers, as well as across the presentation and data access/storage layers.

This is a hard one to swallow, I’ll grant. Consider, for a moment, a simple business rule: a given person’s name can be no longer than 40 characters. It’s a fairly simple rule, and as such should have a fairly simple answer to the question: Where do we enforce this particular rule? Obviously we have a database schema behind the scenes where the data will be stored, and while we could use tables with every column set to be variable-length strings of up to 2000 characters or so (to allow for maximum flexibility in our storage), most developers choose not to. They’ll cite a whole number of different reasons, but the most obvious one is also the most important–by using relational database constraints, the database can act as an automatic enforcer of business rules, such as the one that requires that names be no longer than 40 characters. Any violation of that rule will result in an error from the database.

Right here, right now, we have a violation of the “centralized business logic” rule. Even if the length of a person’s name isn’t what you consider a business rule, what about the rule stating that a person can have zero to one spouses as part of a family unit? That’s obviously a more complicated rule, and usually results in a foreign key constraint on the database in turn. Another business rule enforced within the database.

Perhaps the rules simply need to stay out of the presentation layer, then. But even here we run into problems–how many of you have used a website application where all validation of form data entry happens on the server (instead of in the browser using script), usually one field at a time? This is the main drawback of enforcing presentation-related business rules at the middle- or server-tiers, in that it requires round trips back and forth to carry out. This hurts both performance and scalability of the system over time, yielding a poorer system as a result.

So where, exactly, did we get this fallacy in the first place? We get it from the old-style client/server applications and systems, where all the rules were sort of jumbled together, typically in the code that ran on the client tier. Then, when business logic code needed to change, it required a complete redeploy of the client-side application that ended up costing a fortune in both time and energy, assuming the change could even be done at all–the worst part was when certain elements of code were replicated multiple times all over the system. Changing one meant having to hunt down every place else a particular rule was–or worse, wasn’t–being implemented.

This isn’t to say that trying to make business logic maintainable over time isn’t a good idea–far from it. But much of the driving force behind “centralize your business logic” was really a shrouded cry for “The Once and Only Once Rule” or the “Don’t Repeat Yourself” principle. The problem is that we just lost sight of the forest for the trees, and ended up trying to obey the letter of the law, rather than its spirit and intentions.

Now, the question remains, is this a fallacy of all enterprise systems, worthy of inclusion in the fallacies list? Or is this just a fragment of something more? Much as I hate to admit it, I’m leaning towards the idea that it’s worthy of inclusion (which means Addison-Wesley is going to kill me for trying to make a change this late in the game).

TUESDAY 18 MAY 2004
More on the 11th Fallacy 

A couple people have commented on the 11th Fallacy, so I figured I’d respond with further thoughts.

First, Andres Aguiar wrote:

The only way to centralize it is to have the business logic in metadata, or at least the business logic that needs to be defined in multiple layers. You can then interpret the metadata (hard) or do code-generation with it (easy).

Not necessarily, although certainly metadata helps with the sample situation I spun out (that of field lengths and such). What do you do for rules along the lines of “Person.spouse must be in the database and must have Person.gender != this.gender” (that’s a required rule, if our current administration has its way, anyway….)? And this only brings up the simple rules–the more complicated ones simply defy OCL (or any equivalent) or any other imperative language’s ability to centrally define. A declarative language (like OCL) definitely helps here, but it’s still not the silver bullet.

 

Then, LCS wrote:

Your problem is mixing up logic definition and interpretation. There is no reason why you can’t have a central definition of business logic, but have it interpreted on different tiers.

Remember to differentiate tiers from layers–depending on what you’re saying here, LCS, I’m either agreeing with you or disagreeing with you, depending on the details of your suggestion. 🙂 Unfortunately, however, I don’t think we’re anywhere close to an ability to define the business rules in one place and have those rules “scatter” throughout the system in a meaningful way–even if the tools existed, they’re pretty easily defeated as soon as you have to interoperate with somebody else’s code and/or programs and/or schemas.

 

Finally, Giorgio Valoti wrote:

Exactly. The “Once and only once” rule should be valid for the business rules definition, but not for their execution. Indeed, the 40 char max length rule will be probably checked at least twice: from a browser script and from the relational database.

Actually, three times–remember, you can’t always trust that the browser actually did what it was supposed to, so you’ll have to check it when the input comes through the HTTP pipe on the web server tier, as well. (Attackers are always able to fudge up an HTTP packet with Telnet.) But you’re effectively violating the OAOO or DRY rule by validating in three places–or else you’ve somehow managed to create something like what LCS is referring to above, somehow centralizing the rules definition yet somehow spreading it out into the code (and into at least three different forms, too: script, servlet/ASP.NET code, and database schema). I’ve not yet seen the tool that could do that.

 

Frankly, though, even if said tool did exist, again, we’re defeated by the interop scenarios that are going to become more prevalent as time goes on. More importantly, I’ve never yet run across a situation where all business rules were (or even could be) centralized across the enterprise; across a single stovepipe system, yes, but not across the enterprise or even any meaningful number of systems, for that matter. I’m becoming more and more firmly convinced that this is a legitimate fallacy.

 

Craig responds to the 11th Fallacy 

Craig McClanahan responds:

I believe that it is too strong to call this issue a “fallacy” … but that is mainly because it illustrates a very common trend today of thinking in black and white terms, when the real world has an infinite number of shades of grey :-). The argument to centralize business rules is based in a pragmatic reality — the cost of changing a rule is directly dependent on how many places it is enforced. If a metadata type solution helps you centralize even 50% of your business rules, it is VERY worthwhile, even if it doesn’t deal with the obvious exceptions (“is this credit card number valid” cannot be easily checked by JavaScript running in a client browser). The fact that you can’t centralize *all* business rules is simply reality; it does not invalidate the idea that centralizing where you can reduces maintenance costs, and is therefore useful on its own merits. One of the most popular features of Struts, for example, is the Validator Framework … precisely *because* it allows you to centralize the maintenance of a certain class of business rules (the “40 character name” rule, for example) in one place instead of two. The fact that you need server side validation of this rule (either in the application or the database or both) is obvious. The fact that enforcing the constraint on the client side improves the usability of your application (because the user doesn’t have to wait for the round trip to the server to find out they violated a rule) makes using metadata to enforce the rule in two different tiers very much worthwhile, even if it doesn’t cover all possible use cases. Calling this a fallacy, simply because you can’t use it 100% of the time, is somewhere between idealistic and naive.

I disagree, Craig–if you look at the other fallacies on the list (including the original 8), they all basically point out that we can’t make the assumption that something will be true 100% of the time: “The network is reliable”, “The network is secure”, and so on. Frankly, the network IS reliable, most of the time, and frequently there IS just one administrator, and so on. It’s a fallacy to assume that these truths will ALWAYS hold, though, and so we need to code defensively around the idea that they won’t be there. Remember “Deutsch’s Rule” regarding the fallacies: “Essentially everyone, when they first build a distributed application, makes the following N assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences” (My italics). Such is the same for this one: how many developers haven’t tried to centralize ALL business rules/logic, and discovered that the system proves to be too painful to use and/or maintain as a result? Because we can’t centralize all business logic, we need to design and code around the idea that it won’t be centralized. Or, loosely translated, we shouldn’t try to force-fit logic into the central slot of the system (I hate to use either “tier” or “layer” here, since either or both could very well apply) when it doesn’t make sense to do so.

 

Does the fallacy imply that we shouldn’t look for ways to beat it? Absolutely. Does that mean we’ll find that way? Probably not–there’s just too much stuff that would need to be done to make it doable.