Try making a private static method in Ruby and using it from an instance of the same class… without goofy stuff like send(). As far as I can tell, it’s impossible.
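A minimal illustration (the class and method names here are made up):

```ruby
class Widget
  def self.secret_helper
    "help"
  end
  private_class_method :secret_helper

  def use_helper
    # Both of these raise NoMethodError, because the class method is private:
    #   self.class.secret_helper
    #   Widget.secret_helper
    # The only way through is the goofy stuff:
    self.class.send(:secret_helper)
  end
end
```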
RSpec mock.with(Hash) expects an Array
The following problem, where RSpec would erroneously fail mock parameter checks in specs, has been fixed in either RSpec 2.13.0 or 2.13.1.
RSpec was misinterpreting hash argument expectations as arrays. If your expectation was “mock.with(a: 1, b: 2)”, you’d get:
RSpec::Mocks::MockExpectationError: <MyClass (class)> received :my_method with unexpected arguments expected: ({:a => 1, :b => 2}) got: ([:a, 1], [:b, 2])
If you tried wrapping your arguments in a hash “mock.with({a: 1, b: 2})” you’d get the very confusing:
expected: ([:a, 1], [:b, 2]) got: ([:a, 1], [:b, 2])
Luckily, as mentioned, this is fixed in the latest version. Update if possible!
Scalable (threadsafe) Accumulators in JDK8
Found this on a social site today. Looks like JDK8 is getting some pretty cool-looking accumulators that are threadsafe and performant: http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/f21a3c273fb2
Reducers a la Rich Hickey in Ruby
This is a draft of an article which translates Rich Hickey’s “reducers” concept into the Ruby language.
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html
collect, aka map, does:
1. Recursion (apply function to head, then call map recursively on the tail)
2. Consumes an ordered list (guarantees order of execution)
3. Produces an ordered list
reduce, as currently provided in Clojure, is ignorant of the collection structure & allows the collection to implement the coll-reduce protocol. Also, it can produce anything: an aggregate, a map, etc. Still inherently sequential (monotonic) in the order of the operations (left fold with seed).
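In Ruby terms, reduce is that left fold with a seed:

```ruby
# Left fold: the seed (0) and each element pass through the binary operator in order.
[1, 2, 3, 4].reduce(0) { |result, input| result + input }  # => 10
```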
Don’t create a new collection when we call map or filter… don’t add your result to the collection inside the reduce…
How to produce a result? A Reduction Transformer.
We want to be able to create a recipe for creating pies, maybe even before we have a bag of apples. This implies being able to compose multiple reductions. Composing might look something like: map(take_sticker_off_apple).map(reject_bad_apples)
reducing function: function you pass to reduce… a binary operator, such as addition. Arguments don’t have to be symmetric: order is important. First arg is seed/result/accumulator… second arg is input to process that’s building the result.
mapping a function: you have a reducing function, and you return a new one that applies f to each input before the original reducing function sees it
[ruby]
# f1 is the reducing function (a binary operator, etc. see above)
# f.call(input) is "mess with that input"
def mapping(f)
  # this is a reduction transformer
  ->(f1) {
    ->(result, input) { f1.call(result, f.call(input)) }
  }
end

do_stuff_to_input = ->(input) {
  input.label = nil
  input
}

filter_mapping = mapping(do_stuff_to_input) # we have bound f

reducing_function = ->(result, input) { result + input }

modified_reducing_function = filter_mapping.call(reducing_function) # we have bound f1

bag_of_apples.reduce(&modified_reducing_function)
[/ruby]
Mapping case: 1->1 case
Filtering case: 1->1/0
[ruby]
def filtering(predicate_fn)
  # this is a reduction transformer
  ->(reducing_fn) {
    ->(result, input) {
      if predicate_fn.call(input)
        reducing_fn.call(result, input)
      else
        result
      end
    }
  }
end

bag_of_apples.reduce(
  &filtering(is_good_apple)  # predicate_fn is bound
    .call(array_concat)      # reducing_fn is bound
)
[/ruby]
takes a reducing function, returns one, and if the predicate is true, calls the reducing function on the result with the new input. What if the predicate is false? Just don’t touch the result. This is the right way to write filter.
One to many case: 1->M eg. mapcat
[ruby]
def mapcatting(f)
  ->(reducing_fn) {
    ->(result, input) {
      f.call(input).reduce(result, &reducing_fn)
    }
  }
end

bag_of_apples.reduce(
  &mapcatting(slice_apple_into_pieces(5)).call(array_concat)
)
[/ruby]
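Putting the pieces together, here is a runnable sketch with concrete inputs (the numbers and lambdas stand in for the apples):

```ruby
# Reduction transformers: each takes a reducing function and returns a new one.
def mapping(f)
  ->(reducing_fn) {
    ->(result, input) { reducing_fn.call(result, f.call(input)) }
  }
end

def filtering(predicate_fn)
  ->(reducing_fn) {
    ->(result, input) {
      predicate_fn.call(input) ? reducing_fn.call(result, input) : result
    }
  }
end

array_concat = ->(result, input) { result << input }

# Compose: double each input, then keep only results greater than 4.
rf = mapping(->(x) { x * 2 }).call(
       filtering(->(x) { x > 4 }).call(array_concat))

[1, 2, 3, 4].reduce([], &rf)  # => [6, 8]
```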
There’s a lot of repeated stuff… we can make a macro to reduce it down to the minimum.
It’s also not pretty. Function that returns a function, for example.
Also, this is not the cake.
TO BE CONTINUED
Query languages in JSON? Wtf
MongoDB. Elasticsearch.
Custom query languages in JSON. Why?
This guy sums it up pretty well: http://smsohan.com/blog/2013/01/17/abusing-json/
Am I gonna have to embed a Coffeescript compiler in my program so that I can at least write Mongo queries that don’t make my eyes bleed?
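To make the complaint concrete, here is roughly what a simple OR query looks like in MongoDB’s JSON dialect, next to the SQL it replaces (collection and field names are invented):

```
// MongoDB:
db.users.find({ "$or": [ { "age": { "$gt": 21 } }, { "status": "admin" } ] })

// Roughly the same intent in SQL:
// SELECT * FROM users WHERE age > 21 OR status = 'admin'
```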
Determine century of two-digit year based on proximity to another year
When writing software, it’s sometimes necessary to determine the value of a date which unfortunately uses only two digits to represent the year. Usually, the year is assumed to fall within a specific 100-year range, so a cut-off value can decide whether the year is in the 1900s or the 2000s.
For example, from a stack overflow answer:
[ruby]
year4 = if year < cutoff
  "20" + year
else
  "19" + year
end
[/ruby]

For example, 30 could be used as the cutoff value. Obviously, this assumes that you will never encounter the value "30" and need to interpret it as "2030" instead of "1930". If, on the other hand, you have another date available which is likely to be quite close to the date with the two-digit year (a reference date), it is possible to choose the century such that the date is closest to the reference date. The following code accomplishes nearly the same task by determining the century of a two-digit year based on proximity to a reference year (not a date).

[ruby]
# Given a two-digit year, this method determines the absolute year (ie. the century) based on
# proximity to a reference year.
def interpret_two_digit_year(ref_year, two_digit_year)
  unless (0..99).include?(two_digit_year)
    raise "Invalid argument: year must be in range 0..99, year was #{two_digit_year}"
  end
  ref_nearest_century = ref_year.round(-2)
  ref_century_offset = ref_nearest_century - ref_year
  year_offset_from_ref = (((two_digit_year + ref_century_offset) + 50) % 100) - 50
  ref_year + year_offset_from_ref
end
[/ruby]

The approach is as follows:
- Translate the year into a new coordinate system where it is relative to the first year of the century nearest the reference year, instead of being relative to 0. The year will thereby be in the range (-50..149), relative to the reference year.
- Next, map the year to the range -50..49 relative to the reference year: translate by +50 to bring the range to 0..199, mod by 100 to wrap 100..199 back into 0..99, and then revert the initial translation by subtracting 50, yielding the desired -50..49 range.
- Lastly, add the resulting offset to the reference year to obtain the final result.
While the best approach is to cease using two-digit years entirely, the approach given above may be helpful in situations where two-digit years are unavoidable and other nearby dates are available.
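As a sanity check, here are a couple of worked examples (the method definition is repeated so the snippet stands alone; the reference year 2013 is arbitrary):

```ruby
def interpret_two_digit_year(ref_year, two_digit_year)
  unless (0..99).include?(two_digit_year)
    raise "Invalid argument: year must be in range 0..99, year was #{two_digit_year}"
  end
  ref_nearest_century = ref_year.round(-2)            # 2013 -> 2000
  ref_century_offset = ref_nearest_century - ref_year # 2000 - 2013 = -13
  year_offset_from_ref = (((two_digit_year + ref_century_offset) + 50) % 100) - 50
  ref_year + year_offset_from_ref
end

interpret_two_digit_year(2013, 95)  # => 1995 (1995 is nearer to 2013 than 2095)
interpret_two_digit_year(2013, 30)  # => 2030 (2030 is nearer to 2013 than 1930)
```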
Seq vs Seek
In Clojure, something that is “seq”-able can be turned into an object (a seq) that implements an interface that can provide the first item of an ordered list, aka the head, and the rest of the items, aka the tail. As usual in Clojure, seqs and their methods provide various guarantees such as immutability.
The unfortunate part about the situation is that “seq” is pronounced “seek”. “Seq-able” sounds like “seekable”. Seekability, however, implies a completely different set of properties than a seq. Something that is seekable typically allows random access (access any member of the collection at a cost of O(1)) for example via multiplication of an index by an object size. The seq interface, meanwhile, is O(n) for random access even if the underlying data structure could support random access.
Thus, if you’re listening to a Clojurian speak, keep in mind that when they say “seek”, they’re really saying “seq”.
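The distinction can be sketched in Ruby (a made-up, minimal first/rest seq over an array):

```ruby
# A minimal first/rest "seq" in the spirit of Clojure's seq interface.
class Seq
  def initialize(array, index = 0)
    @array = array
    @index = index
  end

  def first
    @array[@index]
  end

  def rest
    @index + 1 < @array.size ? Seq.new(@array, @index + 1) : nil
  end

  # Through the seq interface, "random" access walks rest n times: O(n).
  def nth(n)
    n.zero? ? first : rest.nth(n - 1)
  end
end

arr = (0..9).to_a
arr[7]               # seekable: O(1) indexed access
Seq.new(arr).nth(7)  # seq: O(n) walk, even though an Array sits underneath
```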
Fog::Storage::AWS::Files#length vs. #all.length
Beware thee calling the #length method on a Fog::Storage::AWS::Files object! It seems to include recently destroyed files!
Instead, you should use #all.length
[ruby]
# Includes destroyed files!!
Fog::Storage.new(my_connection).directories.get(my_directory).files.length

# Better!
Fog::Storage.new(my_connection).directories.get(my_directory).files.all.length
[/ruby]
ActiveModel JSON serialization
If you have code:
<%= JSON.pretty_generate(JSON.parse(@myobject.as_json)) %>
that results in "can't convert Hash into String"
Or
<%= JSON.pretty_generate(@myobject.as_json) %>
That results in: "undefined method `key?' for #
Try:
<%= JSON.pretty_generate(JSON.parse(@myobject.to_json)) %>
And if as_json or to_json is not obeying your “only:”, “include:”, or “except:” directives, make sure you include ActiveModel’s JSON module (ActiveModel::Serializers::JSON) if your object is not an ActiveRecord object (or another object that already inherits it).
Along with that, you’ll need to implement an attributes method. As mentioned in ActiveModel::Serialization, “You need to declare some sort of attributes hash which contains the attributes you want to serialize and their current value.” The weird thing is that it doesn’t matter what the values returned in the hash are. It only obeys the keys, calling .send(key) on your object to get the value for each key.
[ruby]
class MyThing
  include ActiveModel::Serializers::JSON

  ...

  # Needed for ActiveModel::Serialization and things that use it (such as JSON)
  def attributes
    { 'myattr' => nil, 'anotherattr' => nil }
  end
end
[/ruby]
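To see why only the keys matter, here is a plain-Ruby sketch of the behavior described above (this is an illustration, not ActiveModel’s actual implementation; the class and attribute names are made up):

```ruby
class FakeThing
  def attributes
    { 'myattr' => nil }  # the nil value is ignored; only the key matters
  end

  def myattr
    "the real value"
  end

  # Sketch of what the serializer effectively does with the attribute keys:
  def serializable_hash
    attributes.keys.each_with_object({}) { |key, hash| hash[key] = send(key) }
  end
end

FakeThing.new.serializable_hash  # => {"myattr"=>"the real value"}
```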
Perforce “Deleted: edits to the file cannot be submitted without re-adding”
I recently could not submit my Perforce changelist because “out of date files must be resolved or reverted.” After being baffled for a few moments (files were up to date, no files needed to be resolved), I also noticed red text in my p4v submit dialog box that said, “Deleted: edits to the file cannot be submitted without re-adding.” A look through my changelist revealed a deleted file that reported its version as #0/2. I reverted the file, and checked in successfully.