Issue: RubyMine 3.1, Webmock-1.6.2 and “Spec.configure” curse

05.26.2011 00:01 by kbeckman | Comments

I ran into a nasty issue today – an issue that I have no problem admitting would have been way past my abilities to resolve given my current status as a Ruby Noobie. Thank goodness for team members who are way more knowledgeable than I am! I’ll do my best here to set up the environment context that produced the issue and walk through the steps we (“we” is used loosely here because I didn’t help much) used to resolve it…

 

Overview

I kept getting an error while trying to use RubyMine’s integrated debugger to test some of the changes I made to a Rails controller class. Here’s the full console output from the RubyMine debugger…

 

/Users/kbeckman/.rvm/rubies/ruby-1.9.2-p180/bin/ruby -e $stdout.sync=true;$stderr.sync=true;load($0=ARGV.shift) /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/bin/rspec /Users/kbeckman/<ApplicationDir>/spec/controllers/profile_controller_spec.rb --require teamcity/spec/runner/formatter/teamcity/formatter --format Spec::Runner::Formatter::TeamcityFormatter
Testing started at 9:28 PM ...
/Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/webmock-1.6.2/lib/webmock/rspec.rb:23:in `<top (required)>': undefined method `configure' for Spec::Runner:Module (NoMethodError)
    from /Users/kbeckman/<ApplicationDir>/spec/spec_helper.rb:4:in `require'
    from /Users/kbeckman/<ApplicationDir>/spec/spec_helper.rb:4:in `<top (required)>'
    from /Users/kbeckman/<ApplicationDir>/spec/controllers/profile_controller_spec.rb:1:in `require'
    from /Users/kbeckman/<ApplicationDir>/spec/controllers/profile_controller_spec.rb:1:in `<top (required)>'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `block in load_spec_files'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `map'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load_spec_files'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/command_line.rb:18:in `run'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:55:in `run_in_process'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:46:in `run'
    from /Users/kbeckman/.rvm/gems/ruby-1.9.2-p180/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:10:in `block in autorun'
Empty test suite.

Process finished with exit code 1

 

Investigation

Below is the /webmock-1.6.2/lib/webmock/rspec.rb file where the actual exception was thrown… And here’s why:

a) On line 4, “RSpec” was defined but “RSpec::Expectations” was undefined, so the code fell through to line 6.

b) On line 6, “Spec” was defined so the contents of the elsif block were executed, where the global constants RSPEC_NAMESPACE and RSPEC_CONFIGURER are set to whatever webmock thinks they should be. This is a technique dynamic languages use to swap out existing functionality for an entirely different implementation. No need for IoC Containers here!

c) The code skips right over the else block (the branch we really wanted to execute) and continues to line 23.

d) On line 23, the code blows up because of “undefined method ‘configure’ for Spec::Runner:Module (NoMethodError)”

 

require 'webmock'

# RSpec 1.x and 2.x compatibility
if defined?(RSpec) && defined?(RSpec::Expectations)
  RSPEC_NAMESPACE = RSPEC_CONFIGURER = RSpec
elsif defined?(Spec)
  RSPEC_NAMESPACE = Spec
  RSPEC_CONFIGURER = Spec::Runner
else  
  begin
    require 'rspec/core'
    require 'rspec/expectations'
    RSPEC_NAMESPACE = RSPEC_CONFIGURER = RSpec
  rescue LoadError
    require 'spec'
    RSPEC_NAMESPACE = Spec
    RSPEC_CONFIGURER = Spec::Runner
  end
end

require 'webmock/rspec/matchers'
  
RSPEC_CONFIGURER.configure { |config|

  config.include WebMock::API
  config.include WebMock::Matchers

  config.after(:each) do
    WebMock.reset!
  end
}

WebMock::AssertionFailure.error_class = RSPEC_NAMESPACE::Expectations::ExpectationNotMetError

 

So why did this happen? It turns out that somewhere in RubyMine, a “Spec::Runner” module is defined and it doesn’t have the configure() method that we need. If you look at the very first line of the console output above, you’ll notice that RubyMine passes “--require” and “--format” command line arguments to RSpec. The “--require” argument tells RSpec to load whatever lives at that path. I’m not going to go into all of the details, but it has something to do with output formatting for use with JetBrains’ CI server, TeamCity. Whatever lives at that path is causing our problem by defining a “Spec::Runner” before webmock is loaded, which fools webmock into using it instead of the real RSpec.
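
To make that concrete, here’s a contrived Ruby sketch (not RubyMine’s actual formatter code) showing how merely defining a bare “Spec::Runner” module is enough to push webmock down the elsif branch and straight into the NoMethodError above:

 

# Contrived illustration -- any library that defines an empty Spec::Runner
# module is enough to satisfy webmock's defined?(Spec) check.
module Spec
  module Runner
  end
end

# This mirrors what webmock's elsif branch does...
RSPEC_CONFIGURER = Spec::Runner

# ...and this mirrors line 23, which blows up because the module has no configure method.
RSPEC_CONFIGURER.configure { |config| }
# => NoMethodError: undefined method `configure' for Spec::Runner:Module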

 

Resolution

Fixing the issue required a slight modification to the /webmock-1.6.2/lib/webmock/rspec.rb file listed above. Here’s our fully modified file. Please note the modified elsif condition… We added a check to determine whether or not the “Spec.configure” method was defined, since that was the call that was previously throwing the exception. Now the code evaluates “Spec.configure” as undefined and falls through to execute the else block, where webmock successfully sets up its requirements using the real RSpec definition from rspec-core instead of whatever was in the teamcity/spec/runner/formatter/teamcity/formatter directory. If you’re curious as to what in RubyMine’s internals was causing the issue, help yourself with the following command. Please note your RubyMine location might differ from mine…

 

cd ~/applications/RubyMine\ 3.1.1.app/rb
grep -R Spec **
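
While you’re poking around, a quick illustrative sanity check (not part of the original troubleshooting, just a suggestion) is to ask the stray constant directly from an irb or debugger session launched the same way:

 

# If the teamcity formatter has been required, Spec exists but its Runner
# module has nothing to configure.
puts defined?(Spec)                         # => "constant"
puts Spec::Runner.respond_to?(:configure)   # => false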

 

require 'webmock'

# RSpec 1.x and 2.x compatibility
if defined?(RSpec) && defined?(RSpec::Expectations)
  RSPEC_NAMESPACE = RSPEC_CONFIGURER = RSpec
elsif defined?(Spec) && defined?(Spec.configure)
  RSPEC_NAMESPACE = Spec
  RSPEC_CONFIGURER = Spec::Runner
else  
  begin
    require 'rspec/core'
    require 'rspec/expectations'
    RSPEC_NAMESPACE = RSPEC_CONFIGURER = RSpec
  rescue LoadError
    require 'spec'
    RSPEC_NAMESPACE = Spec
    RSPEC_CONFIGURER = Spec::Runner
  end
end

require 'webmock/rspec/matchers'
  
RSPEC_CONFIGURER.configure { |config|

  config.include WebMock::API
  config.include WebMock::Matchers

  config.after(:each) do
    WebMock.reset!
  end
}

WebMock::AssertionFailure.error_class = RSPEC_NAMESPACE::Expectations::ExpectationNotMetError

 

So, we’re not quite done yet… We’ve fixed the issue by forcing the setup block in the webmock rspec.rb file to execute, but as soon as we run “bundle install” or “gem update webmock” (or any of the other countless ways webmock’s code could get updated), we’ll more than likely hit the same issue again. To make sure that doesn’t happen, we did the following:

a) Copy the entire /webmock-1.6.2 directory that our project was originally sourcing the code from into a customization folder we keep per-project. Our new location for webmock is ../<ApplicationDirectory>/vendor/gems/webmock-1.6.2.

b) Edit the project’s Gemfile so webmock is sourced from the edited copy rather than a directory that can easily be overwritten or updated. Our Gemfile changes for webmock are below… I’ve commented out the old reference in favor of our new location.

c) Make sure you “$ git add” and commit your vendored webmock source directory…

 

...

group :test do
  #gem 'webmock'
  gem 'webmock', :path => "#{File.expand_path(__FILE__)}/../vendor/gems/webmock-1.6.2"
end

...
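
As a side note, Bundler resolves relative :path entries against the Gemfile’s own directory, so (assuming that behavior in your Bundler version) the same thing can be written without the File.expand_path gymnastics:

 

group :test do
  # Hypothetical equivalent -- a relative :path is resolved from the Gemfile's directory
  gem 'webmock', :path => 'vendor/gems/webmock-1.6.2'
end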

 

I hope this helps!!!

Choosing [Cross-Platform] Development Tools…

05.25.2011 06:30 by kbeckman | Comments

If you had asked me a few months ago where my allegiances fell in software development, I would have quickly told you (without giving much thought) on the Microsoft stack of technologies. Fast forward a few months and I’m in week 3 of my first consulting gig, and the only Microsoft applications I use on a daily basis are Office 2011 for Mac and an instance of Windows Server 2008 R2 running in VMware Fusion to host SQL Server 2008 R2. Instead of Windows 7, TFS and Visual Studio 2010, my daily development experience includes OS X Snow Leopard, JetBrains RubyMine, Git and a shit-ton of command line utilities.

 

A lot can change in a few months…

 

Working as a consultant, you have to get up and running quickly no matter what you’re working on or what technology stack you’re working with. That’s probably one of the primary reasons you were hired in the first place. If you’re slow out of the gate, it could cost your client some serious dollars and cost you some of your hard-earned Street Cred. One thing you absolutely do not want to waste time on is learning new tools – chances are you’ll have enough other things to worry about. So the first lesson I’ve learned while expanding my technology stack is that I need to be more diligent about evaluating cross-platform capabilities whenever I’m evaluating new software tools.

 

Here’s a short list of the tools I recently said goodbye to in favor of a cross-platform replacement… Thankfully I was familiar with all but one of them from prior projects; they just weren’t my tools of choice until now. I also threw in a couple that were already my primary tools but deserve a mention anyway because of their cross-platform support.

 

DiffMerge over WinMerge

This one hurt the most… I used WinMerge each day and actually made it my diff tool of choice in Visual Studio overriding the crappy default offering. WinMerge and I go way back to all-weekend release branch code merges where Visual Source Safe was the source control provider (and didn’t help you much with the process). I had almost every shortcut key combo memorized and had enhanced the experience considerably through custom configurations over many years of use. I’m really going to miss this one, but DiffMerge is here to stay for a while… DiffMerge runs on everything and I currently have it installed on Windows 7, Windows Server 2008 R2, Ubuntu Desktop and Mac OS X Snow Leopard. Installing on Ubuntu is a little tricky, but it can be done.

 

Someday WinMerge might rise again… There is a team of developers working on WinMerge 3 – a cross-platform port of WinMerge using Qt.

 

Git Bash & gitk over GitExtensions

I give GitExtensions full credit as the tool that helped me conquer the Git workflow allowing me to begin using it in my personal projects. GitExtensions is by far the best Windows Git GUI tool that I’ve used… It integrates nicely with Visual Studio and adheres to the Git workflow and repository structure. GitExtensions is exactly what you need if you aren’t quite ready for command line Git. Supposedly there are a few folks out there that can get GitExtensions running on Ubuntu with some additional Mono packages, but I’m not one of them. I gave ‘er hell, but decided it was just better to move to the command line. Heck, half of my development is done from the command line now anyway. What’s one more tool…?

 

Firefox & Chrome over IE

It’s been a long time since IE has been my default browser; most of the time I’ve just kept it around for SharePoint sites and Outlook Web Access. This one though is a no-brainer… IE9 is nice and fast, and the developer tools are good, but there’s no sense spending a lot of time with it if you might find yourself on another development stack. You’ll probably need to keep IE around in a VM if you don’t develop on Windows, just to make sure all of your styling and JavaScript functions properly, but I don’t think it’s worth the trouble for anything else.

 

I’m soon going to be saying the same kind of thing about Firefox too if they don’t get the JavaScript debugger working in 4.0.x.

I’ll leave Safari support up to the QA team.

 

VirtualBox over Any Other Client Virtualization Software

VirtualBox is easily one of the best free tools out there… On my Windows machines, I still Boot2VHD (and probably always will), but VirtualBox has been my client virtualization utility for quite some time now and I love it… In addition to its own VDI virtual disk format, it will run VHD virtual disks, so there’s no reason to use Virtual PC (ever again). I’ll use VMware if someone else is paying for it and requires me to use it.

A Few Thoughts on Project Estimation…

03.24.2011 22:45 by kbeckman | Comments

Project estimation is one of the most difficult things that fall under the responsibility of development team leads and project managers. Successful estimation requires mastery of an imperfect science where past performance does not always reflect future results. I’m sure that sounds familiar, especially if you’ve ever met with an investment advisor to plan for your retirement or done any independent investing of your own. The truth is that planning project work and providing accurate completion estimates is every bit as difficult as successful investing.

 

Just like the ever-changing global market, every project comes with a new set of variables that are different from the last. Projects include both familiar and unfamiliar technologies; business requirements with varying levels of completeness; varying priority depending on the targeted audience or business focus; development team members with assorted levels of experience, skillset and domain knowledge; greenfield and brownfield requirements for internal, external or inherited applications; and varying involvement from 3rd Party development teams that might include consultants, corporate partners, offshore teams or all of the above. Development leads must carefully weigh all project variables and apply experience, discipline and judgment to arrive at estimates that are not only accurate but attainable. Agile project management makes things a bit easier (if done correctly) by providing a framework that allows teams to plan work in small 2-3 week increments rather than attempting to estimate the whole project at once. Agile also provides a means for teams to identify problems and impediments early in the development process and increase their visibility among interested parties, hopefully resulting in earlier resolution. Agile helps, but it’s No Silver Bullet.

 

In my experience, the most difficult of all project variables to account for is the involvement of 3rd Party development teams, or work involving a 3rd Party application purchased by your organization but maintained elsewhere. Over the last several years, I’ve encountered these situations time and time again, and I’ve finally decided that I’ve learned this hard lesson for the last time. Historically my estimates have been entirely too optimistic. In the future, they won’t be… My past experience will lead me to provide more pessimistic estimates and maintain a heightened awareness for uncertainty whenever 3rd Parties are involved.

 

3rd Party Applications

This is probably the most common type of 3rd Party involvement you’ll encounter throughout your career. These applications exist in an organization because they solve a problem in a very specific problem domain or because your organization lacks the time and resources to build the functionality from scratch. Nevertheless, you will find yourself having to extend these applications or integrate them into your existing enterprise architecture. If you don’t own the source code, it will be almost impossible to debug. Adding ASP.NET tracing where possible will help, and so will Red Gate’s .NET Reflector. If you or your company can spring for .NET Reflector Professional Edition, it provides a means for decompiling the assemblies for debugging. Depending on your project, this could be worth every penny… When estimating work on external applications, DO NOT take the vendor’s word on how easy it is going to be to extend the product or add a plug-in component. Remember that they already sold the application to you or someone else at your organization and will probably try to sell you something else if they can. You’ll probably get a lot of pressure from product owners for a quick turnaround on additional features, but remember that the speed of this type of work depends solely on your knowledge of and familiarity with the application. Use your best pessimistic judgment in estimating and evaluate the product as closely as you can before attempting to implement or extend it. If the vendor doesn’t have good documentation or available consulting services to aid in your project, don’t bother until you can get at least one or the other.

 

3rd Party Development Teams

Working with 3rd Party development teams produces its own set of challenges… One major challenge, but one you’ll have the most control over, is the communication barrier created by differences in time zones. The farther away you are from other developers on your project, the harder effective communication becomes. Skype, SharedView and schedule flexibility among all parties are essential in reducing communication friction. Other issues that will most certainly arise are conflicting views on technology, performance, scalability and application quality. There’s really only one right answer here – whoever is funding the development wins. The entity funding the project has the responsibility of ensuring 3rd Party teams adhere to its rules regarding performance, scalability and quality, not what the 3rd Party teams are historically accustomed to or immediately capable of producing. Anticipate this problem as the length and scope of the project increase. Lastly, I want to comment on an item that should be considered when involving 3rd Party development teams, although there may be little or no way of directly controlling it. Differences in compensation and performance incentives across the organizations involved will surely result in differing focuses from the participating development teams. In a perfect world, all participating parties are incentivized according to project quality, the team’s ability to meet deadlines and the overall project outcome.

ASP.NET Authorization Configuration with WIF

02.23.2011 06:45 by kbeckman | Comments

I recently ran into an issue after using Windows Identity Foundation (WIF) to convert an existing ASP.NET application into a Relying Party that uses a Secure Token Service (Identity Provider) rather than traditional ASP.NET Forms Authentication. I began by applying a very aggressive ASP.NET authorization configuration to the entire site. I locked down the site, disallowing any anonymous access using the <authorization> element in the <system.web> section. Not all areas of the site required restricted access, so <location> tags were used to allow anonymous access to certain folders and page resources. Everything seemed to be functioning exactly as desired. It worked; it was secure; and it blocked access to the secure areas of the application for anyone who hadn’t first authenticated against the Identity Provider. All is well!

 

Then one day a defect crosses my inbox… Our QA analyst was testing the custom error pages on the site. Without authenticating, she purposefully entered an incorrect URL expecting to see our custom error page for HTTP 404 errors. Instead, she was redirected to our Identity Provider for authentication. After authenticating, she was correctly redirected back to the site and on to the 404 custom error page… This wasn’t the desired behavior, as she shouldn’t have had to authenticate before being redirected to the custom error page.

 

So what’s the deal? The problem was that, by default, the entire site was configured to require WIF authentication, and that included non-existent resources that produce HTTP 404 errors. Following is a sample of the web.config elements that are of interest here. This is how the site was originally configured.

 

<system.web>
    <authorization>
        <deny users="?"/>
    </authorization>
</system.web>

<location path="SomeAnonymousResource.aspx">
    <system.web>
        <authorization>
            <allow users="*"/>
        </authorization>
    </system.web>
</location>

<location path="SomeAnonymousFolder">
    <system.web>
        <authorization>
            <allow users="*"/>
        </authorization>
    </system.web>
</location>

 

Here’s what the configuration elements look like now… I’ve replaced the aggressive authorization restrictions at the site root with a more targeted approach restricting access only as necessary. Instead of <location> elements to allow anonymous access for resources, there are <location> elements for restricting secure parts of the application. Problem solved – no more redirects for authentication just to end up at the 404 custom error page.

 

<system.web>
    <authorization>
        <allow users="*"/>
    </authorization>
</system.web>

<location path="SomeSecureResource.aspx">
    <system.web>
        <authorization>
            <deny users="?"/>
        </authorization>
    </system.web>
</location>

<location path="SomeSecureFolder">
    <system.web>
        <authorization>
            <deny users="?"/>
        </authorization>
    </system.web>
</location>

Maintenance Automation - Boot2VHD

02.19.2011 08:02 by kbeckman | Comments

Sorry for the repost… For some reason, this post was lost in the latest site upgrade. Since then, I’ve made a few updates.

 

I’m finally getting around to the post I promised a few weeks ago where I mentioned some automation scripts for performing maintenance on your bootable VHDs. These automation scripts come in a 2-part format – maintenance from within the context of your bootable VHD and the maintenance of the VHD file itself. I highly recommend you check out my prior post, Boot2VHD Best Practices, before using these scripts. It will give you a better idea of what operations the scripts perform (and the reasons behind them).

 

Prerequisites

There are just a few prerequisites that you need to make sure you have installed if you want to use these scripts in their entirety. Make sure to install them to the locations mentioned below or you will have additional customization work to do on these scripts…

1)   7-Zip (Host OS) – This is an awesome freeware file compression utility. The scripts use 7-Zip for archiving the VHDs before and after the maintenance cycle. The scripts assume you are using the 64-bit version of the application and that 7-Zip is installed at: C:\Program Files\7-Zip.

2)   You need to have a backup location set up with the following file structure: a ..\VHD Backups\ folder containing an ..\VHD Backups\Old\ subfolder (my scripts use D:\VHD Backups\ and D:\VHD Backups\Old\).


3)   SysInternals Suite  (Bootable VHD) – You’ll need to download this application suite and unzip it to C:\Program Files (x86)\Sysinternals Suite. The main application we need for this process is SDelete – it’s used for zeroing-out the free space in your VHD and is required to prep your VHD for the compression process.

4)   Just a note here… This has only been tested with Windows 7 (64-bit); however, it should also work with Windows Server 2008 R2.

 

Notes and Disclaimer

You’ll definitely have to customize these scripts for your own environment. As long as the prerequisite applications are installed in the locations that I’ve mentioned, you shouldn’t have to change the prep script – Prep Bootable VHD for Maintenance Cycle.bat. I highly recommend backing up your VHDs before running the scripts for the first time. This will ensure that you’re able to roll back if something wasn’t configured properly. Be sure to read the comments included within the scripts. They should give you a good idea of what’s about to happen and how you should configure everything for your own environment. Lastly, I want to mention that DiskPart requires its scripts to be in separate files when you’re automating anything. There are two text files included in the Host OS directory that contain the DiskPart tasks.

 

Script 1: Prep Bootable VHD for Maintenance Cycle.bat

Run this script as an administrator while booted into your VHD. It’s located in the Virtual Machine folder in the .zip file and is the first step in the maintenance cycle. The script opens the Disk Cleanup utility, allowing you to select the cleanup options to perform. Next it runs the defrag utility on your bootable VHD’s C: drive. Finally, it uses SDelete to zero out the free space on your VHD drive for the compression process later.

 

:: Run the DiskCleanup utility...
cleanmgr.exe /d c:

:: Run disk defrag...
defrag c: /H /U /V
defrag c: /H /X /U /V

cd "c:\program files (x86)\sysinternals suite"

:: Run SDelete to zero-out free space...
sdelete.exe -p 1 -c c:
pause

  

Script 2: Run VHD Maintenance Cycle.bat

Run this script as an administrator while booted into your Host OS. This script contains the meat of the automated maintenance cycle and will require some configuration on your part to make sure you’re pointing to the right VHD and backup locations for your environment. The script assumes a single bootable VHD in the management cycle, but you can easily add the extra commands to support additional VHDs. In the section below, I explain every step of the maintenance workflow.

 

1) First the script deletes the old VHD backups in the <backup location>\old directory.

2) Next the script moves the backups from the prior maintenance cycle to the <backup location>\old directory.

 

:: Delete the backups from two cycles ago, then move the prior cycle's backups to the ..\old directory...
del "d:\vhd backups\old\*.7z"
move "d:\vhd backups\*.7z" "d:\vhd backups\old"

 

3) Next the script uses a DiskPart command script located in the VHD Mgmt - Compact and Merge Disks.txt file to compact your disk. Be sure to edit this text file to contain the proper location of your disk.

  

[parent script]
:: Run Diskpart compact and merge tasks...
diskpart /s "D:\Git\System\Scripts\VHD Maintenance\Host OS\VHD Mgmt - Compact and Merge Disks.txt"


[diskpart script]
select vdisk file="v:\native\developer\ultimatex64.vhd"
compact vdisk

  

4) Finally, the script creates a backup of the VHD in the <backup location> directory.

 

:: Backup the VHDs...
call:ZipFile "d:\vhd backups\Ultimatex64_Development.7z" "v:\native\developer\ultimatex64_development.vhd"


::--------------------------------------------------
:: Creates a 7-Zip .7z archive.
:: Params:    %1 = destination archive
::            %2 = source file
:: http://www.dostips.com/DtTutoFunctions.php#FunctionTutorial.CreatingAFunction
::--------------------------------------------------
:ZipFile
cd "c:\program files\7-zip"
7z a -t7z %1 %2
goto:eof

 

I hope there are some folks out there who find these scripts useful… They have saved me a lot of time by not having to babysit the maintenance process. Now all of my VHD maintenance runs after hours and I just check it in the morning. If anything fails, I’ve always got a backup! I’ve included links to two script packages – one that assumes a single bootable VHD (as described in this post) and another that assumes a parent bootable VHD with an associated differencing disk.

C4SC_VHD_Maintenance (Single).zip (3.34 kb)

C4SC_VHD_Maintenance (Differencing).zip (4.00 kb)