Hatena::Group rubyist

trotr's diary


2008-01-19

How I'll use this place 14:12

Let me try jotting down small notes here.

Usually I write at http://d.hatena*/trotr

Return an array with duplicates removed (an Array#uniq knock-off) 14:12

require 'set'
xs = [2,1,2,3,7,2,1,7,2,23,4,4,7,5,6,3,2]

# d: dedup via Set, then restore the original order by sorting on index
def d(seed)
  Set.new(seed).sort { |l, r| seed.index(l) <=> seed.index(r) }
end

# e: same idea, but sort_by computes each element's index only once
def e(seed)
  Set.new(seed).sort_by { |e| seed.index(e) }
end

# f: work destructively on a copy, deleting all remaining duplicates
# of each element as it is shifted off
def f(seed)
  result = []
  xs = seed.dup
  until xs.empty?
    x = xs.shift
    result << x
    xs.delete(x)
  end
  result
end
f(xs) # => [2, 1, 3, 7, 23, 4, 5, 6]
xs # => [2, 1, 2, 3, 7, 2, 1, 7, 2, 23, 4, 4, 7, 5, 6, 3, 2] (unchanged, since f dups its argument)
# g: fold into an accumulator, appending only elements not yet seen
def g(seed)
  seed.inject([]) { |re, e| re << e unless re.index(e); re }
end
g(xs) # => [2, 1, 3, 7, 23, 4, 5, 6]
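For comparison (my addition, not part of the original benchmark), a Hash-based variant avoids the linear `index` and `delete` scans entirely, doing an O(1) membership check per element:

```ruby
# h: track seen elements in a Hash for O(1) membership checks,
# pushing each first occurrence onto the result array in order
def h(seed)
  seen = {}
  result = []
  seed.each do |x|
    next if seen[x]
    seen[x] = true
    result << x
  end
  result
end

xs = [2,1,2,3,7,2,1,7,2,23,4,4,7,5,6,3,2]
h(xs) # => [2, 1, 3, 7, 23, 4, 5, 6]
```

On large, mostly-unique inputs this should beat the index-scanning versions, since each element is examined only once.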

require 'benchmark'
Benchmark.bmbm do |x|
  n = 1000
  %w{e f g}.each{ |e| x.report("#{e}:"){ n.times{ send(e,xs)} }}
end

# Rehearsal --------------------------------------
# e:   0.890000   0.150000   1.040000 (  1.041349)
# f:   0.250000   0.010000   0.260000 (  0.252573)
# g:   0.500000   0.090000   0.590000 (  0.598211)
# ----------------------------- total: 1.890000sec

#          user     system      total        real
# e:   0.840000   0.150000   0.990000 (  0.985385)
# f:   0.240000   0.010000   0.250000 (  0.248240)
# g:   0.520000   0.070000   0.590000 (  0.593549)

The more duplicates there are, the bigger f's advantage.
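A quick sketch (my addition) to check that claim: f deletes every remaining copy of each element it takes, so a duplicate-heavy input shrinks the working array quickly, while g rescans its accumulator for every element regardless.

```ruby
require 'benchmark'

def f(seed)
  result = []
  xs = seed.dup
  until xs.empty?
    x = xs.shift
    result << x
    xs.delete(x)  # removes every remaining copy, shrinking the scan
  end
  result
end

def g(seed)
  seed.inject([]) { |re, e| re << e unless re.index(e); re }
end

many_dups = Array.new(2000) { rand(10) }    # only a few distinct values
few_dups  = Array.new(2000) { rand(2000) }  # mostly unique

Benchmark.bm(12) do |x|
  x.report("f many dups") { 100.times { f(many_dups) } }
  x.report("g many dups") { 100.times { g(many_dups) } }
  x.report("f few dups")  { 100.times { f(few_dups) } }
  x.report("g few dups")  { 100.times { g(few_dups) } }
end
```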
