Fix tests failing when config.order set to random #115

Merged 6 commits on Aug 10, 2013
1 change: 1 addition & 0 deletions Gemfile
@@ -15,5 +15,6 @@ group :test do
gem 'simplecov-rcov-text'
end
gem "fakefs", :require => "fakefs/safe"
gem 'json'
Member:

Now that we don't support Ruby 1.8, there's no need to use multi_json: every Ruby 1.9 and higher ships with a json library. I'm okay with having a metric_fu JSON dumper just for encapsulation purposes, but it would then just call JSON.dump:

  # Encodes a Ruby object as JSON.
  def dump(object, options={})
    JSON.dump(object, options)
  end
  alias encode dump

(Also see intridea/multi_json#113)

We can either require it in the Loader, or in the individual files that need it. (I prefer the latter, perhaps on activation)
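An encapsulated dumper along those lines could look like this sketch (the `MetricFu::Serializer` module and method names are illustrative, not metric_fu's actual API):

```ruby
require 'json'

# Hypothetical encapsulation per the suggestion above: one metric_fu
# dumper that simply delegates to the stdlib JSON library available on
# every Ruby 1.9+, so MultiJson is no longer needed.
module MetricFu
  module Serializer
    # Encodes a Ruby object as JSON.
    def self.dump(object)
      JSON.dump(object)
    end

    class << self
      alias_method :encode, :dump
    end
  end
end
```

Callers would then use `MetricFu::Serializer.dump(hash)` and the adapter choice stays in one place.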

end
gemspec :path => File.expand_path('..', __FILE__)
9 changes: 9 additions & 0 deletions lib/metric_fu.rb
@@ -35,4 +35,13 @@ def self.tasks_load(tasks_relative_path)
end

LOADER.setup

def self.reset
Member:

If this is a testing concern only, I'd rather monkey-patch the module in spec/support/metric_monkey.rb or something

Member Author:

Yeah, went back and forth on that myself.

# TODO Don't like how this method needs to know
# all of these class variables that are defined
# in separate classes.
@configuration = nil
@graph = nil
@result = nil
end
end
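The spec-only alternative raised in the review thread could look like the following sketch (the file name spec/support/metric_monkey.rb and the ivar list come from the discussion; the rest is illustrative):

```ruby
# Hypothetical spec/support/metric_monkey.rb: reopen the module in the
# test suite so the library itself never carries a reset concern.
module MetricFu
  # Clears memoized state between examples. Note it still has to know
  # about ivars set in separate classes (@configuration, @graph,
  # @result), which is the drawback the inline TODO points out.
  def self.reset
    @configuration = nil
    @graph = nil
    @result = nil
  end
end
```

The trade-off is the same either way; the monkey-patch just keeps the test-only API out of the shipped library.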
10 changes: 8 additions & 2 deletions spec/metric_fu/formatter/html_spec.rb
@@ -12,6 +12,10 @@
# for some platforms.
@metric_with_graph = MetricFu.configuration.mri? ? :cane : :flay
@metric_without_graph = :hotspots
MetricFu.configuration.configure_metrics.each do |metric|
metric.enabled = true if [@metric_with_graph, @metric_without_graph].include?(metric.name)
Member:

wow, why?

Member Author:

why what?

Member:

  1. get a metric with a graph. if mri, use cane, else flay (why?)
  2. get a metric without a graph. hotspots
  3. enable only those two metrics

But it's sorta hard to read

Couldn't you

def enable_metric_with_graph
  MetricFu.get_metric(:cane).enabled = true
end
def enable_metric_without_graph
  MetricFu.get_metric(:hotspots).enabled = true
end

Member Author:

Oh, hrm, yeah, could do that. That's better.

As for 1 (mri ? cane : flay): I think I address it somewhat in a comment that isn't quite visible in this diff, but basically, if we configure all the metrics, the specs are SUPER slow. So I just wanted to test that it works for at least one metric with a graph and one without. Cane is by far the fastest to run, but (at least when I implemented that code?) it didn't work on all platforms, so the next best option was flay. Yeah, totally not optimal and confusing. I would much prefer some sort of mock metric.

Member:

Ok, I never knew why you'd prefer cane over flay, here. (I understood the part about not running all the metrics 🌈 )

end

MetricFu.result.add(@metric_with_graph) # metric w/ graph
MetricFu.result.add(@metric_without_graph) # metric w/out graph
end
@@ -67,7 +71,8 @@ def directory(name)

context 'when on OS X' do
before do
- MetricFu.configuration.stub(:platform).and_return('darwin')
+ MetricFu.configuration.stub(:osx?).and_return(true)
+ MetricFu.configuration.stub(:is_cruise_control_rb?).and_return(false)
end

it "can open the results in the browser" do
@@ -117,7 +122,8 @@ def directory(name)

context 'when on OS X' do
before do
- MetricFu.configuration.stub(:platform).and_return('darwin')
+ MetricFu.configuration.stub(:osx?).and_return(true)
+ MetricFu.configuration.stub(:is_cruise_control_rb?).and_return(false)
end

it "can open the results in the browser from the custom output directory" do
4 changes: 2 additions & 2 deletions spec/metric_fu/metrics/hotspots/hotspots_spec.rb
@@ -71,7 +71,7 @@
hotspots.instance_variable_set(:@analyzer, analyzer)
result = hotspots.analyze
expected = MultiJson.load("{\"methods\":[{\"location\":{\"class_name\":\"Client\",\"method_name\":\"Client#client_requested_sync\",\"file_path\":\"lib/client/client.rb\",\"hash\":7919384682,\"simple_method_name\":\"#client_requested_sync\"},\"details\":{\"reek\":\"found 1 code smells\",\"flog\":\"complexity is 37.9\"}}],\"classes\":[{\"location\":{\"class_name\":\"Client\",\"method_name\":null,\"file_path\":\"lib/client/client.rb\",\"hash\":7995629750},\"details\":{\"reek\":\"found 2 code smells\",\"flog\":\"complexity is 37.9\"}}],\"files\":[{\"location\":{\"class_name\":null,\"method_name\":null,\"file_path\":\"lib/client/client.rb\",\"hash\":-5738801681},\"details\":{\"reek\":\"found 2 code smells\",\"flog\":\"complexity is 37.9\",\"churn\":\"detected high level of churn (changed 54 times)\"}},{\"location\":{\"class_name\":null,\"method_name\":null,\"file_path\":\"lib/client/foo.rb\",\"hash\":-7081271905},\"details\":{\"churn\":\"detected high level of churn (changed 52 times)\"}}]}")
- compare_hashes(MultiJson.load(hotspots.to_h[:hotspots].to_json), expected)
+ compare_hashes(MultiJson.load(MultiJson.dump(hotspots.to_h[:hotspots])), expected)
end

it "should put the changes into a hash" do
@@ -80,7 +80,7 @@
hotspots.instance_variable_set(:@analyzer, analyzer)
hotspots.analyze
expected = MultiJson.load("{\"methods\":[{\"location\":{\"class_name\":\"Client\",\"method_name\":\"Client#client_requested_sync\",\"file_path\":\"lib/client/client.rb\",\"hash\":7919384682,\"simple_method_name\":\"#client_requested_sync\"},\"details\":{\"reek\":\"found 1 code smells\",\"flog\":\"complexity is 37.9\"}}],\"classes\":[{\"location\":{\"class_name\":\"Client\",\"method_name\":null,\"file_path\":\"lib/client/client.rb\",\"hash\":7995629750},\"details\":{\"reek\":\"found 2 code smells\",\"flog\":\"complexity is 37.9\"}}],\"files\":[{\"location\":{\"class_name\":null,\"method_name\":null,\"file_path\":\"lib/client/client.rb\",\"hash\":-5738801681},\"details\":{\"reek\":\"found 2 code smells\",\"flog\":\"complexity is 37.9\",\"churn\":\"detected high level of churn (changed 54 times)\"}},{\"location\":{\"class_name\":null,\"method_name\":null,\"file_path\":\"lib/client/foo.rb\",\"hash\":-7081271905},\"details\":{\"churn\":\"detected high level of churn (changed 52 times)\"}}]}")
- compare_hashes(MultiJson.load(hotspots.to_h[:hotspots].to_json), expected)
+ compare_hashes(MultiJson.load(MultiJson.dump(hotspots.to_h[:hotspots])), expected)
end
# really testing the output of analyzed_problems#worst_items
it "should return the worst item granularities: files, classes, methods" do
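The spec change above swaps `to_h[:hotspots].to_json` for an explicit `MultiJson.dump`, so the comparison no longer depends on a `to_json` core extension being loaded. The round-trip it relies on can be sketched with the stdlib json library (one of the adapters MultiJson delegates to), assuming a plain string-keyed hash:

```ruby
require 'json'

# Minimal stand-in for the spec's hotspots hash (contents illustrative).
hotspots = {
  "methods" => [
    { "location" => { "class_name" => "Client" },
      "details"  => { "flog" => "complexity is 37.9" } }
  ]
}

# Dump with an explicit serializer rather than a to_json monkey-patch,
# then parse back: the result is a plain hash that compares equal to
# the original regardless of which other gems were loaded first.
roundtrip = JSON.parse(JSON.dump(hotspots))
```

With random spec ordering, an explicit dumper avoids failures caused by another spec (or gem) changing what `to_json` means mid-suite.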
8 changes: 4 additions & 4 deletions spec/run_spec.rb
@@ -161,11 +161,7 @@ def data_directory
MetricFu::Configuration.run do |config|
config.formatters.clear
end

cleanup_fs
end


end

context "given other options" do
@@ -187,6 +183,10 @@

end

after do
cleanup_fs
end

def metric_fu(options = "--no-open")
MfDebugger::Logger.capture_output {
begin
5 changes: 5 additions & 0 deletions spec/spec_helper.rb
@@ -8,6 +8,7 @@
require 'rspec/autorun'
require 'date'
require 'construct'
require 'json'

# add lib to the load path just like rubygems does
$:.push File.expand_path("../../lib", __FILE__)
@@ -29,4 +30,8 @@ def mf_log(msg); mf_debug(msg); end
config.after(:suite) do
cleanup_test_files
end

config.after(:each) do
MetricFu.reset
end
end
2 changes: 2 additions & 0 deletions spec/support/suite.rb
@@ -35,6 +35,8 @@ def setup_fs
MetricFu::Io::FileSystem.stub(:directory).with('base_directory').and_return("tmp/metric_fu/test")
MetricFu::Io::FileSystem.stub(:directory).with('output_directory').and_return("tmp/metric_fu/test/output")
MetricFu::Io::FileSystem.stub(:directory).with('data_directory').and_return("tmp/metric_fu/test/_data")
MetricFu::Io::FileSystem.stub(:directory).with('code_dirs').and_return(%w(lib))
MetricFu::Io::FileSystem.stub(:directory).with('scratch_directory').and_return('tmp/metric_fu/test/scratch')
end
end
