A short brain dump on automated testing

Sometimes you come across a blogger who’s so thoroughly covered a topic that you can drill down through years of his writing and keep finding awesome stuff. Alister Scott’s blog “Watirmelon” is such a source. Here are some highlights…

Make sure to read through his Automated Testing slide deck!

http://watirmelon.com/2012/01/31/introducing-the-software-testing-ice-cream-cone/

“I propose we rename QA to mean Quality Advocate … Whilst their responsibilities include testing, they aren’t limited to just that. They work closely with other team members to build quality in, whether that be through clear, consistent acceptance criteria, ensuring adequate automated test coverage at the unit/integration level, asking for dev box walkthroughs, or encouraging collaboration/discussion about doing better testing within the team.”

http://watirmelon.com/2013/02/25/are-software-testers-the-gatekeepers-or-guardians-of-quality/

A:  User stories aren’t ‘done’ until you’ve tested each of them, which means you get to provide information to the Product Owner about each of them. You define the quality bar and you work closely with your team and product owner to strive for it.

B: Whilst you think you may define the quality of the system, it’s actually the development team as a whole that does that. Everyone is under pressure to deliver and if you act like an unreasonable gatekeeper of quality, you’ll quickly gain enemies or have people simply go around or above you.

http://watirmelon.com/2013/05/08/should-you-use-the-givenwhenthen-format-to-specify-automated-acceptance-tests/

A: Given/When/Then format provides a high level domain specific language to specify the intention of automated acceptance tests (very easily transferred from a user story) separate to the implementation of your automated acceptance tests. This separation allows changing the test from testing the UI to testing an API without changing the intention of the test.
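To make point A concrete, here is a hypothetical Given/When/Then specification (the feature and values are invented for illustration). Nothing in it says whether the steps will drive a browser or call an API; that decision lives entirely in the step implementations, which can be swapped without touching the intention:

```gherkin
Feature: Funds transfer
  Scenario: Transfer between a customer's own accounts
    Given a customer with $100 in savings
    When the customer transfers $40 from savings to checking
    Then the savings balance is $60
    And the checking balance is $40
```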

B:  One of the selling points of writing Given/When/Then tests is that they are readable by business. But in reality, business never read your Given/When/Then specifications, so it makes no sense to invest in them.

“Quick wins give you breathing space to develop a good solution.”

http://watirmelon.com/2009/10/01/not-ruining-your-test-automation-strategy/

Alister: Automated testing through non-GUI means is smart, but sometimes you have no choice. I have made automated testing through the GUI reliable and maintainable, but it required skill on my part. Automated GUI tests can be used to deliberately show discrepancies in the GUI, often highlighting unintended GUI changes. It’s generally not a good idea to completely write something off because you may have seen it done poorly yourself. It’s like saying Agile is wrong because you worked somewhere where Agile was done poorly.

Bob:  My beef is not with GUI testing tools per se. Rather it is with teams that test their entire app through the GUI. You are correct in that sometimes you have no choice. In such cases very careful test construction can mitigate the fragility problem. But no amount of care can come close to competing with an approach that runs the majority of tests through the API.

QA? Project Management? …or just Paradevs?

http://watirmelon.com/2013/01/31/so-what-exactly-is-a-paradev/

A couple of years ago now, just after I started at ThoughtWorks, I read a tweet from a fellow ThoughtWorks developer here in Brisbane along the lines of “the paradevs at work enjoyed my lunchtime session on networking”. My ears pricked: “what’s a paradev?” I asked. “It’s someone who helps the developers develop” she replied. “Oh” I thought.

http://watirmelon.com/2013/05/07/do-you-even-need-a-software-tester-on-your-agile-team/

A: If you don’t particularly care about quality, have good production monitoring, and can get internal engineers and major partners to do your QA then you may get away with not having a tester on your agile team.

B: Software testers provide a unique questioning perspective which is critical to finding problems before go-live. Even with solid automated testing in place, nothing can replicate the human eye and human judgement.

http://watirmelon.com/2013/04/14/which-is-better-manual-or-automated-testing/

A: Even when automating a test scenario, you have to run it manually at least once in order to automate it, so automated testing can’t be done without manual testing.

B: Because the automated tests are explicit, they also execute consistently as they don’t get tired and/or lazy like us humans.  Automated tests also allow you to test things that aren’t manually possible, for example, ‘what if I processed ten transactions simultaneously’.
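A sketch of that “ten transactions simultaneously” idea, using Python for brevity (the `Account` class here is an invented stand-in for the system under test, not from any of the posts): fire the calls from a thread pool and assert they all succeed with a consistent final balance.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Invented stand-in for the system under test: a thread-safe account debit.
class Account:
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def process_transaction(self, amount):
        with self._lock:  # without this lock, concurrent debits could race
            self.balance -= amount
            return True

account = Account(balance=1000)

# Submit ten transactions at once -- effectively impossible to do by hand.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(account.process_transaction, [10] * 10))

assert all(results)
assert account.balance == 900  # ten debits of 10, no lost updates
```

The same shape works for load-style checks at the API level; the point is that the harness, not a human, produces the simultaneity.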

http://watirmelon.com/2013/04/14/test-in-production/

The key to testing changes as soon as they hit production is to have real-time, continuous real-user-experience monitoring. More comprehensive automated acceptance tests can be written in a non-destructive style, which means they can be run in production. They can then be run immediately following a fresh production deployment, and as feedback from the tests is received, any issues can be fixed, deployed to production, and tested again.

http://watirmelon.com/2013/04/13/choosing-a-language-for-your-automated-acceptance-tests/

A: Automated acceptance tests shouldn’t be developed in isolation, so having these written in the same language as your application (usually C# or Java) will ensure that the programmers are fully engaged and will maximize the likelihood of having these tests maintained alongside your application code.

B: If your software testers are responsible for writing and maintaining your automated acceptance tests, then it makes sense to let them write in dynamic scripting languages. These are popular with testers because they are lightweight to install, easy to learn, and have no licensing costs, allowing an unlimited number of build agents to run these tests as part of continuous integration. As testers develop their skills in these languages they can quickly create throwaway scripts to perform repetitive setup tasks required for their story or exploratory testing, such as creating multiple records or rebuilding a database.

http://watirmelon.com/2013/04/13/who-should-write-your-automated-acceptance-tests/

A: The benefits of having the programmers in your team writing and maintaining these tests is that they will be maintained and executed as soon as any change occurs, so they’ll be kept more up to date and less likely to go stale.

B: Software testers are particularly good at building automated acceptance tests that cover an end-to-end process in the system, often called user journeys. This is because they have a good understanding of the journey, whereas a programmer may only understand the logic behind a particular screen. Testers should be involved in writing this style of acceptance test so that it is representative of real usage.

http://watirmelon.com/2013/03/10/is-test-management-wrong/

Now, each agile team is responsible for its own quality; the tester advocates quality through accurate acceptance criteria, unit testing, automated acceptance testing, story testing and exploratory testing. These activities aren’t managed in a test management tool, but against each user story in a lightweight story management tool (such as Trello or Mingle). The tester is responsible for managing his/her own testing. Step-by-step test cases (such as those in Quality Center) are no longer needed, as each user story has acceptance criteria, and each team writes automated acceptance tests for the functionality it develops, which act as both automated regression tests and living documentation.

http://watirmelon.com/2013/05/20/do-you-need-an-automated-acceptance-testing-framework/

A: If you’re starting off with automated acceptance testing and you don’t have some kind of framework, e.g. page object models, in place, then you can quickly develop a mess.
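The page object idea in point A, sketched in Python with a fake driver (the `LoginPage` class, the selectors, and the driver API are all invented for illustration): tests speak in intent-level methods, and only the page object knows about selectors, so a UI change touches one class instead of every test.

```python
# Invented fake "driver" standing in for Selenium/Watir, just for this sketch.
class FakeDriver:
    def __init__(self):
        self.fields = {}

    def fill(self, selector, value):
        self.fields[selector] = value

    def click(self, selector):
        self.fields["clicked"] = selector

# The page object: the only place that knows the page's selectors.
class LoginPage:
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# The test reads as intent; if "#username" changes, only LoginPage changes.
driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
assert driver.fields["#username"] == "alice"
assert driver.fields["clicked"] == "#submit"
```

Without this layer, every test duplicates the selectors inline, which is exactly the mess point A warns about.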

B: Over-engineered automated acceptance test frameworks are harmful for a team as they dictate certain ways of doing things which means the team can be less efficient in developing what they need to deliver.

http://pragprog.com/book/achbd/the-rspec-book

Caution! You’ve fallen for a trap. You’ve picked up this book thinking it was about RSpec. Fortunately, you decided to read the foreword. Good! That gives me the opportunity to tell you about the mistake you just made and possibly save you from an unexpected fate. -Uncle Bob

http://watirmelon.com/2011/05/31/an-automated-testing-journey/

MVC Route Testing Boilerplate with JustMock

Back in the day, Phil Haack wrote up a good guide for unit testing the routes created in ASP.NET MVC. I just set up these tests in a new MVC 4 project with JustMock as my mocking framework, so I wanted to put up my small modifications of his code to work there.

First we have a few helper methods that mock up an HttpContextBase and allow the routes to be rendered into RouteData.

[code language="csharp"]
public static void AssertRoute(RouteCollection routes, string url,
    Dictionary<string, string> expectations)
{
    var httpContextMock = Mock.Create<HttpContextBase>();
    Mock.Arrange(() => httpContextMock.Request.AppRelativeCurrentExecutionFilePath)
        .Returns(url);

    RouteData routeData = routes.GetRouteData(httpContextMock);
    Assert.IsNotNull(routeData, "Should have found the route");

    foreach (string property in expectations.Keys)
    {
        Assert.IsTrue(string.Equals(expectations[property],
                routeData.Values[property].ToString(),
                StringComparison.OrdinalIgnoreCase),
            string.Format("Expected '{0}', not '{1}' for '{2}'.",
                expectations[property], routeData.Values[property].ToString(), property));
    }
}

public static void AssertIgnoreRoute(RouteCollection routes, string url)
{
    var httpContextMock = Mock.Create<HttpContextBase>();
    Mock.Arrange(() => httpContextMock.Request.AppRelativeCurrentExecutionFilePath)
        .Returns(url);

    RouteData routeData = routes.GetRouteData(httpContextMock);
    Assert.IsNotNull(routeData, "Should have found the route");
    Assert.IsInstanceOf<StopRoutingHandler>(routeData.RouteHandler);
}
[/code]

Tests for the default route and the basic controller/action route:
[code language="csharp"]
[Test]
public void RegisterRoutes_AddsDefaultRoute()
{
    var collection = new RouteCollection();
    RouteConfig.RegisterRoutes(collection);
    var expectations = new Dictionary<string, string>();
    expectations.Add("controller", "home");
    expectations.Add("action", "index");
    expectations.Add("id", "");
    AssertRoute(collection, "~/", expectations);
}

[Test]
public void RegisterRoutes_AddsControllerActionIdRoute()
{
    var collection = new RouteCollection();
    RouteConfig.RegisterRoutes(collection);
    var expectations = new Dictionary<string, string>();
    expectations.Add("controller", "home");
    expectations.Add("action", "index");
    expectations.Add("id", "1");
    AssertRoute(collection, "~/Home/Index/1", expectations);
}
[/code]

…and an easy test to make sure that .axd handlers are not routed through the routing engine:
[code language="csharp"]
[Test]
public void RegisterRoutes_IgnoresAxd()
{
    var collection = new RouteCollection();
    RouteConfig.RegisterRoutes(collection);
    AssertIgnoreRoute(collection, "handler.axd/somestuffhere");
}
[/code]

Curation

You may not know that I have a couple of iPhone apps which I’ve submitted to the Apple store, and which have not been approved because of a lack of polish or focused value to the people who would buy them. It’s a pain for me, but in the end I have to be thankful that Apple takes an interest in the quality of what’s on the store.

I was just browsing the Windows 8 App Store and found two “Top Paid” apps, one called “Word++” and one called “Windows Media Player 9”, neither one from Microsoft, but each looking as close as they can to being an actual Microsoft app. I’m not impressed that Microsoft can’t keep that kind of fraud-ware out of their store.

Chrome Bad!


I’m so sad that I’m increasingly of the opinion that Google does not have my interests at heart. Really I don’t mind if they don’t care about *me*…it’s that they are losing the values that made them great to start with.

The End of the Password

I seriously cannot wait for us to be done with the password! The idea of a human-remembered secret protecting our access hasn’t really been a safe or secure one since people started plugging phones into computers. Hopefully we’re starting to see some action on this front, with Michael Barrett (CISO of PayPal) starting an alliance to “obliterate user IDs and passwords and PINs from the face of the planet.”

The FIDO Alliance seems to be interested in taking a combination of biometrics, USB storage, and TPM-embedded hardware and using it to provide secure authentication across the web. Certainly this is an idea whose time is nearly here, with easy-to-use services providing open two-factor authentication for applications, and the advent of identity federation services.

We also need it badly: a large proportion of the high-profile security breaches reported in the press are both caused by and result in password disclosure. Disclosed passwords, even those stored as one-way hashes, are getting easier to crack by brute force. It’s also easy to social-engineer your way into someone’s password-protected accounts and completely derail their life. The current best practices for password management systems were defined in 1985 and are still implemented poorly and incompletely; we can do better. Passwords also place a responsibility on engineering groups to store them securely, so that a compromised password on one system doesn’t lead to many compromised systems (algorithms like scrypt, bcrypt, and PBKDF2 with high iteration counts can do the trick [1] [2]).
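A minimal sketch of the slow-hash idea using PBKDF2 from Python’s standard library (the function names and the iteration count here are illustrative choices, not a vetted policy): each password gets a random salt, and the deliberately expensive derivation makes brute-forcing a leaked database far more costly.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Derive a slow, salted hash; store salt + iteration count beside the digest."""
    salt = os.urandom(16)  # unique per password, defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, iterations, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, iterations, digest)
assert not verify_password("wrong guess", salt, iterations, digest)
```

Storing the salt and iteration count alongside the digest lets you raise the work factor over time as hardware improves.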

But even with the best password authentication system we can design, we are still stuck with a link between the keyboard and the user’s memory as the essential component of assuring who’s trying to gain access. Passwords should be complex, unique, and hard to lose. This is not a job for a person’s scattered memory; the combination of better identity tools, including biometrics and mobile devices, can take us beyond the idea of ‘accounts’ with ‘usernames’ and ‘passwords’ and toward a more serious idea of identity.

Where I go for my tech and development news fix

A friend recently asked me what blogs to follow for learning more about software engineering, and I gave him this list. I thought I’d share it here.

Udi Dahan – The Software Simplist – Udi is one of the best people writing on the subject of large system architecture in the enterprise. I get a lot of value just trying to understand the words he uses, let alone his ideas.

Ayende @ Rahien – Ayende Rahien aka Oren Eini is a fantastic coder, responsible for NHibernate, Rhino Mocks, Entity Framework Profiler, and RavenDb. His daily posts follow the things he’s learning and working on as well as broader insights into coding in the .NET world.

Scott Hanselman – Hillsboro resident Scott Hanselman is one of the celebrities of the .NET world. Currently he works in the ASP.NET/Azure team at Microsoft and constantly works to Open Source the frameworks he works on.

Alvin Ashcraft’s Morning Dew – This is my ‘go to’ resource for everything else that happens in .NET land. Alvin collects the best blog posts of the day and provides you a quick list of things to look at. Much better than subscribing to dozens of blogs.

Knock Me Out – Ryan Niemeyer writes about the various ways to effectively use Knockout.js in your projects, how to solve sticky problems, and how to improve the performance of your pages.

Steven Sanderson’s blog  – Author of one of the better books on ASP.NET MVC, as well as the Knockout.js library, Sanderson provides insights on web tech.

Techmeme – News of the technology world, sorted and grouped by lead story. Find the best article on the news of the day without hitting all the news sites.

Hacker News – Links and discussion from the world of venture-funded software startups.

C# on Reddit – Discussion on the C# world.

Krebs on Security – Automated network hacking devices, zero-day exploits, and ATM skimmers: hot security stories from an information security researcher.

Seth’s Blog – Wisdom and insight from one of the wizards in the white hat marketing world. Learn to be a better person and a better contributor in your work.

Schneier on Security – The Chuck Norris of information security. Broad insights into the philosophy and future of secure systems.

Entity Framework Migrations and Database Initialization vs. MiniProfiler

The Problem:

If MiniProfiler is initialized before our Entity Framework database initialization strategies execute, the initialization fails with an error about a missing migration table.

If the Entity Framework database initialization strategies execute first, access to entities fails with a type-casting exception as the MiniProfiler DbConnection is forced into a SqlConnection variable (in an internal generic).

The Cause:

When MiniProfiler initializes, it uses reflection to retrieve a collection of database providers from a private static field in System.Data.Common.DbProviderFactories. It then rewrites this list with MiniProfiler shim providers to replace the native providers. This allows MiniProfiler to intercept any calls to the database silently.

When Entity Framework initializes, it starts to compile the data models and create cached initialized databases stored in System.Data.Entity.Internal.LazyInternalContext inside some private static fields. Once these are created, queries against the DbContext use the cached models and databases which are internally typed to use the providers that existed at initialization time.

When the Entity Framework database initialization strategy runs, it needs access to the bare, native Sql provider, not the MiniProfiler shim, in order to correctly generate the SQL to create tables. But once these calls to the native provider are made, the native provider is cached into LazyInternalContext and we can no longer inject the MiniProfiler shims without runtime failures.

My Solution:

Access the private collections inside System.Data.Entity.Internal.LazyInternalContext and clear out the cached compiled models and initialized databases.

If I perform this purge between the operation of the EF database initialization strategies and the initialization of MiniProfiler, the MiniProfiler shims can then be inserted without causing later runtime failures.

Code:

This code did the trick for me:
[code language="csharp"]
// Requires: using System.Collections; using System.Data.Entity; using System.Reflection;
Type type = typeof(DbContext).Assembly.GetType("System.Data.Entity.Internal.LazyInternalContext");

// Clear the cache of initialized databases
object concurrentDictionary = type.GetField("InitializedDatabases", BindingFlags.NonPublic | BindingFlags.Static).GetValue(null);
var initializedDatabaseCache = (IDictionary)concurrentDictionary;
if (initializedDatabaseCache != null) initializedDatabaseCache.Clear();

// Clear the cache of compiled models
object concurrentDictionary2 = type.GetField("CachedModels", BindingFlags.NonPublic | BindingFlags.Static).GetValue(null);
var modelsCache = (IDictionary)concurrentDictionary2;
if (modelsCache != null) modelsCache.Clear();
[/code]

Warning:

It appears that the names of the internal fields in LazyInternalContext change between versions of EF, so you may need to modify this code to work with the exact version of EF that you include in your project.

Ridiculous Cellular Internet Directionality

I’ve taken to testing my cellular phone in a 360 degree rotation when I camp out at a restaurant with my laptop. I’ve discovered that I can get a 4x speed improvement by pointing it the right way.

The punchline is that the best upstream and the best downstream bandwidth seem to come from different directions.

Update: Whether the phone is face down or face up also affects the result. Ping times also vary a lot by directionality. I had best results with the phone face up, and the bottom pointed in the direction of the least signal blockage (my guess).

Disabling Dell Laptop ‘NUM LOCK: ON’ and ‘NUM LOCK: OFF’ messages

I have a nice Dell Precision desktop replacement laptop, and a while back, I noticed that it had a cool software driver feature. When I enabled or disabled NUMLOCK, I’d get a visual notification on the screen that this had happened.

Unfortunately, I eventually found that this feature interacted poorly with some remote desktop and VM software, causing the notification to flicker on and off distractingly. Occasionally this happened to me when I was trying to run a presentation on my laptop. Nasty.

Today I spent the effort to find out how to fix the problem. It turns out that this clever little feature is an undocumented feature of the Dell Bluetooth driver’s tray application. You can eliminate it by killing the tray icon application BTTRAY.EXE. Disabling the icon in the app’s settings does not do the trick; you must kill the app. There is also no configurable setting in the app to disable the NUMLOCK visual notification.

Luckily, there is a registry setting that can disable the feature while allowing you to use your bluetooth device fully.

[HKEY_LOCAL_MACHINE\SOFTWARE\Widcomm\BTConfig\General]
"KeyIndication"=dword:00000000

After changing this key, restarting your computer or relaunching

C:\Program Files\WIDCOMM\Bluetooth Software\BTTray.exe

will disable the popups.

If this does not work for you, look into ‘quickset’, as others have reported that it also can cause this issue.