A trillion points of altruism to Julia's Gitter/Slack/Discourse chat and its benign and helpful netizens.
- Lost a couple of days notes... grrr. Gotta remember to push after every day.
- Do not try to beat Parsers.jl by Jacob Quinn. That memory mapping and dealing with streams via byte arrays is just wickedly fast. You can memory-map a file from disk into RAM to make access MUCH faster with `Mmap.mmap("path/to/file/name")` and then feed the byte `Array` to a capable function.
- Ran into the CelestialCartographers, creators of Ahorn, a Celeste videogame map-making program written in Julia. They are really cool, and crushed my code's performance because I didn't remember to use `Array{Char,1}(undef, n)`. Never again!
- The expression `:123` is typed as `Int`; turns out the parser is not that silly and knows a bit before assigning to symbols.
- Remember to use install scripts with `using Pkg; Pkg.add("Foo")`, and not `] add Foo`, because terminals break more easily than the standard API.
- `ImmutableDict` in Julia 1.5 is faaaast and allows one to use dicts in a performant manner, so long as you do not seek to delete keys (but you can overwrite them).
- Starting Julia with `-p` or `-t` will give you one process per core vs. threads per core.
- `Base.parse_input_line` exists to parse a Julia expression.
- Perhaps there is some clever `@ndefs`-kinda stuff one can do to speed up competitive-programming reading-from-file shenanigans...
- Z3.jl exists and has amazing bindings to the Z3 solver.
- Ledecka.jl (or some other property-based testing framework) can probably be used to add `@pre! issorted` / `@post! issorted` to functions: it compiles without them but generates a test suite and helps you track down bugs.
- Need to get cracking with Gtk.jl and GraphsIO.jl...
- Write stuff down!
- Prefer `ncodeunits` over `sizeof` to avoid UTF32String problems; same performance on ASCII characters. (Credit to Colin M. Caine)
- You can declare variables as `local` and not assign them. [Exercism Run-Length-Encoding]
- Write `Base.:+` so that Julia doesn't think you are overloading broadcasting.
- Remember to `throw` your `Error` so that `try`/`catch` can help handle it.
- Don't use a `@goto` when a `while true` will do!
- You can use comprehensions with Dicts! (Though they still allocate a bit):
```julia
function transform(input::Dict)
    Dict(
        lowercase(letter) => value
        for (value, letters) in input
        for letter in lowercase.(letters)
    )
end
```
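A runnable sketch of the comprehension above, with a made-up Scrabble-style input (the example values are my own illustration, not from the exercise):

```julia
# Repeating the Dict-comprehension transform so this sketch is self-contained.
function transform(input::Dict)
    Dict(
        lowercase(letter) => value
        for (value, letters) in input
        for letter in lowercase.(letters)
    )
end

# Made-up input: score => list of letters.
scores = transform(Dict(1 => ["A", "E"], 2 => ["D", "G"]))
# scores == Dict("a" => 1, "e" => 1, "d" => 2, "g" => 2)
```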
- `get` can take a default value, so `get(dict, key, 0)` will return 0 if the key is not found!
- Rewrite functions as single lines with parentheses to make the body an expression: `foo(arg) = (arg.name = newname())`. [Exercism Robot-name]
- Don't write `x == 1 || x == 0` if you can write `x <= 1`. Avoid `||` when possible.
- Prefer `print(io, char)` over `write(io, char)` because "print goes from a Char to one or more UTF-8 bytes, or one or more UTF-16 words", and `write` does not, says SPJ. `write` outputs the binary representation of the char, not the ASCII digits.
- Avoid indexing into strings, avoid `ncodeunits`, avoid `codeunits`. There be dragons there... stick with `for ch in str`. Iterators are your friends! Indexing into strings causes too many headaches for people.
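A tiny sketch of why char iteration beats indexing (the example string is my own):

```julia
# 'ñ' takes two bytes in UTF-8, so byte indices and character counts diverge.
s = "año"
chars = [c for c in s]     # iterating yields Chars, no index math needed
nbytes = ncodeunits(s)     # 4: byte (code unit) count
nchars = length(s)         # 3: character count, O(n) for String
```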
- There is no need for an `io = IOBuffer()` if you aren't writing to it constantly.
- You can use the `divrem` function to get around remainders and such. [Exercism Resistors]
- Be ready to store a `divrem` call in `d, r = divrem(x, y)` to save some flops (divisions are costly!)
- No need to specify `Int64` when `Int` will do: think of the poor Raspberry Pis!
- To prevent people from using invalid initializations, use inner constructors. [Exercism Robot-name]
- A non-mutating version of `String(take!(io))` was given by Ying Bo Ma in #helpdesk:

```julia
julia> io = IOBuffer(); println(io, "hello!");

julia> seekstart(io); read(io, String)
"hello!\n"

julia> seekstart(io); read(io, String)
"hello!\n"
```
- Don't use
string(.., ..., ...)
where a"$..."
will do in areturn
.
- Use functions to access fields of structs in order to have generic codes.
- You can use a
function foo(x) ... end
form in anew
internal constructor!
- Sister's bday. Had a sweet dance party and setup her room in a cool way.
- Rereading Scott's comment on `Clock` in Exercism: don't overparametrize (also respect tabbing, sheesh).
- Use boolean evaluations in a `(...)` to save yourself an assignment if you have to add 1 or 0:

```julia
Base.:+(x::AbstractClock, y::AbstractClock) = Clock(x.h + y.h + (x.m + y.m >= 60), x.m + y.m)
```
- STOP. OVER. CONSTRAINING. TYPES. Be the type. Let it duck.
- For string processing with ascii conditionals: Don't do filters first, then a loop. Just check the string in the loop and exit early if conditions don't hold.
- STOP using the `str = ""; if ... str = str * "foo"` pattern. Best to use an `io = IOBuffer()`, `write(io, char)` into the buffer, and then `String(take!(io))`.
- Consider also using `string(a, b, c ? x : y, d)` to construct the final string to return.
- Overload `Base.show` (which `sprint` calls) to get custom pretty printing for your types.
- `'0' <= x <= '2'` is better than `'1' == x || x == '2'`, as it avoids a branch. [Exercism Trinary]
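Sketching the `IOBuffer` build-then-`take!` pattern from a few notes above (toy example of mine):

```julia
# Build a string incrementally without repeated `str *= ...` allocations.
io = IOBuffer()
for ch in "hello"
    print(io, uppercase(ch))  # print emits text; write would emit raw bytes
end
result = String(take!(io))    # "HELLO"
```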
- `length(str)` on a `String` from Base Julia (UTF-8) is an O(n) operation, whereas `sizeof(str)` is O(1).
- Looking up the documentation for `pairs` made me wonder if there isn't a `Base` method for counting appearances of elements. I kept digging into `Base.Iterators` and found a whole world to rewrite Exercism with. There was `Iterators.accumulate`, which is a lazy cumulative fold, `Iterators.takewhile(pred, collection)`, `Iterators.product`, `Iterators.cycle`, and many friends. Must investigate and brush up on IterTools.jl and Transducers.jl.
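A few of those `Base.Iterators` tools in action, on toy inputs of mine (note `Iterators.takewhile` and `Iterators.accumulate` need a reasonably recent Julia, 1.4/1.5+):

```julia
# Lazy running sum, realized with collect.
partial_sums = collect(Iterators.accumulate(+, 1:4))   # [1, 3, 6, 10]

# Take elements while a predicate holds.
small = collect(Iterators.takewhile(<(4), 1:10))       # [1, 2, 3]

# Cartesian product of two ranges.
grid = collect(Iterators.product(1:2, 1:2))            # 2×2 Matrix of tuples
```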
- Colin M. Caine recommends avoiding a bitstring altogether and using binary literals like `0b01 & 0b01` and such.
- YES!!! There exists an `ndigits` function, which gives you the number of digits of the argument.
- Damn... you can add `Char`s, like in @taigua's Atbash cipher submission: `cipher(c::Char) = isdigit(c) ? c : 'a' + ('z' - c)`
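Quick check of the `Char` arithmetic in that cipher (repeated here so it runs standalone):

```julia
# 'a' + ('z' - c) reflects a letter around the alphabet: Char - Char
# gives an Int, and Char + Int gives a Char back.
cipher(c::Char) = isdigit(c) ? c : 'a' + ('z' - c)

cipher('a')  # 'z'
cipher('m')  # 'n'
cipher('7')  # '7' (digits pass through)
```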
- Don't forget you can `strip` trailing characters from a string!
- Try the `get!(dict, key) do ... end` syntax at some point.
- Remember to use the frame number plus `Ctrl-Q` (e.g. `1` then `Ctrl-Q`) on the REPL to jump into the stacktrace directly.
- Set up vim as the default editor with `EDITOR` or `JULIA_EDITOR` in the env.
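The `get!` do-block from a couple of notes up, sketched with a toy cache (the names are mine):

```julia
# get!(f, dict, key): runs the block only when the key is missing,
# stores the result, and returns it either way.
cache = Dict{Int,Int}()
v1 = get!(cache, 10) do
    10^2          # computed and cached on the first lookup
end
v2 = get!(cache, 10) do
    error("never runs: the key is cached now")
end
```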
- `filter(isdigit, str)` seems much cleaner than `join(isdigit(i) ? i : "" for i in str)` when working with strings. (Exercism phone-number)
- The `rot180` function exists in Base and helps to solve problems like (Exercism spiral-matrix).
- Preallocating with `undef` can have huge performance speedups from saving allocations: see (Exercism Pascal's triangle)
```julia
# credit to shybyte
function triangle2(n::Int)
    n >= 0 || throw(DomainError(n))
    n > 0 || return Vector{Int}[]
    rows = Array{Array{Int,1},1}(undef, n)
    rows[1] = [1]
    @inbounds for row_index in 2:n
        previous_row = rows[row_index - 1]
        row = Array{Int,1}(undef, row_index)
        row[1] = 1
        for i in 2:row_index-1
            row[i] = previous_row[i-1] + previous_row[i]
        end
        row[row_index] = 1
        rows[row_index] = row
    end
    rows
end
```
- Remember that the `allunique` function with generators and `if`s is super spiffy! (Exercism isogram)

```julia
isisogram(s) = allunique(c for c in lowercase(s) if isletter(c))
```
- cmcaine suggested I use `any(rem.(n, (3, 5, 7)) .== 0)` instead of the clunky `if !(n % 3 == 0 || n % 5 == 0 || n % 7 == 0)` on (Exercism Raindrops). And for a super extra added punch, you can just do:

```julia
acc = join(s for (d, s) in ((3, "Pling"), (5, "Plang"), (7, "Plong")) if n % d == 0)
acc == "" ? string(n) : acc
```
- When short-circuiting expressions, remember to use parentheses for side effects.

```julia
# credit to icweave, the solution I wanted to write but couldn't figure out
function raindrops(number::Int)
    s = ""
    number % 3 == 0 && (s *= "Pling")
    number % 5 == 0 && (s *= "Plang")
    number % 7 == 0 && (s *= "Plong")
    s == "" ? string(number) : s
end
```
- Remember that when indexing into strings, you get out chars. Gotta be careful when pulling out `s = bitstring(3)` and checking for `'1' == s[1]` and not for `"1"`.
- `using Base.Cartesian; @nexprs 10 i -> x_i = A[i]` for defining 10 variables at a time.
- Remember to use non-standard string literals like `r"..."` and friends:

```julia
macro r_str(p)
    Regex(p)
end
```

And also the read-only byte array:

```julia
julia> x = b"123"
3-element Base.CodeUnits{UInt8,String}:
 0x31
 0x32
 0x33
```
- Macros MUST return expressions, which are then evaluated by the compiler:

```julia
julia> macro sayhello(name)
           return :( println("Hello, ", $name) )
       end
```
- Simeon Schaub has a nasty way of building up an empty named tuple... `(;()...)`. Until we get `(;)`, that is.
- Chris goes all out on DiffEq philosophy and what separates it from the others:

I think the main things are:
- Refinement of standard tools (performance and new algorithms)
- Integrated analysis (parameter estimation, uncertainty quantification)
- Expanded problem domain (stochasticity, discrete stochastic jump equations, random ordinary differential equations, etc.)

Most of the ecosystems before focused on just solving ODEs with adaptive timestepping. In fact, the vast majority of codes were the same wrapped Fortran codes (dopri, Sundials, and LSODA, which is actually just an old version of Sundials). However, those were all written in the 80's and 90's, and there has been a lot of research (not to mention a small change in computer architecture) since that happened. So along the lines of one: use newer algorithms, make them internally SIMD and multithreaded, etc. Along the lines of two: there have been some "addons" or external software in MATLAB for doing this, but it required "choosing" a specific type of ODE solver. Now you can give it generically any DE solver for any kind of DE, and you can do extended analysis on it with the built-in tools. And three: there really isn't a comprehensive suite which handles the wide variety of differential equations. MATLAB does ODEs, DAEs, and DDEs (delay differential equations), BVPs too; some codes here and there do versions of that as well. Sundials, and thus SciPy, does ODEs and DAEs, and then they have a separate wrapper for BVPs. R wraps essentially the same codes as SciPy. None of these really handle SDEs.

Christopher Rackauckas @ChrisRackauckas 12:55
And the specialized extra packages that do handle SDEs are not well maintained and don't implement many algorithms, so definitely including random equations (RODEs, SDEs) is unique. Then tying this all to PDEs. Of course, we wrap all of the same algorithms the other suites do, but we don't use them as the core. We use them as bait in benchmarks. And I think that's essentially the summary.

Christopher Rackauckas @ChrisRackauckas 12:56
If you look at some extended diffeq feature in another language, say parameter estimation, you can pull up how to do it in Python: http://stackoverflow.com/questions/11278836/fitting-data-to-system-of-odes-using-python-via-scipy-numpy http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html But if you want to do it in Julia, we have a built-in function that does exactly what the top searches are doing, http://docs.juliadiffeq.org/latest/analysis/parameter_estimation.html#lm_fit-1, but we recommend you don't use it because it's a really bad method, and instead point you to more advanced methods. Seeing things like that (everyone re-implementing Levenberg-Marquardt parameter estimation for each new project, instead of developing a suite of well-tested more advanced methods) pisses me off so much that I had to do it.
- Found a super handy guide for finite difference methods as a review.
- Read the Fornberg "Calculation of weights in finite difference formulas" paper. Cool insights by hand.
- Chris recommends lyx.org for LaTeX stuff, and we learned about why kwargs are slow and about heap-allocated arrays. Some of the design philosophy for DiffEq.jl is here.
  Rosenbrock methods for stiff differential equations are a family of single-step methods for solving ordinary differential equations that contain a wide range of characteristic timescales. They are related to the implicit Runge-Kutta methods and are also known as Kaps-Rentrop methods.
- Read the 3 Shampine papers, as well as Stroustrup's paper on CS education/curriculum.
- Chris gives us an awesome lecture on the mathematical history of numerical methods:

Yeah, might as well be. It's like LSODA: people like it because of history, and the idea. In reality, Sundials descended from the Petzold algorithms (LSODA). Early Sundials had stiffness detection and switching, because the code was essentially LSODA; they took it out because it slowed things down (the detection on multistep methods can be costly).

[Since Shampine wrote this, the authors of NR have consulted a worker active in the field. Unfortunately, a great many other experts in the field consider the advice they got to be very poor indeed -- extrapolation methods are almost always substantially inferior to Runge-Kutta, Taylor's series, or multistep methods.] There's a better way of thinking about extrapolation techniques: essentially, they are a bad arbitrary-order Runge-Kutta method. If you pick an order, then at that order it actually is a Runge-Kutta method; if you check the number of steps, there are far more steps than optimal. And if you check the coefficient of the highest-order error term, it's really bad, so it's essentially just a series of bad RK methods. However, if your necessary tolerance is really low, then a higher-order RK method will always do better, but the definition of "low" seems to be really low: like, 5th order does well until about 1e-5, then something like 8th order does well until you need sub-1e-16 accuracy; 14th order still beats extrapolation even at accuracies like 1e-40.
A recent result by Gyongy is a BS technique for parabolic SPDEs which shows that higher-order methods can achieve arbitrary accuracy (easy way to prove that), but it still doesn't break the Kloeden computational barrier (the accuracy vs. computational cost metric in SPDEs).

Random Differential Equations: wait for Kloeden's book.

• Davie & Gaines (2000) showed that any numerical scheme applied to the SPDE (6) with f = 0 which uses only values of the noise Wt cannot converge faster than the rate of 1/6 with respect to the computational cost.

https://www.researchgate.net/publication/314242937_Random_Ordinary_Differential_Equations_and_their_Numerical_Solution You can find the thesis if you look hard enough; it's online. It has a bunch of methods for higher-order RODEs, and I'll get around to implementing it. Oh, I stored the reference in the issue discussing RODEs in diffeq: SciML/DifferentialEquations.jl#145 http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/40146

If you want the most modern "book" on the subject, this is it: http://www.math.uni-luebeck.de/mitarbeiter/roessler/publikationen-e.php http://epubs.siam.org/doi/10.1137/09076636X (that being the main article, I think), or this review: http://www.math.uni-hamburg.de/research/papers/prst/prst2010-02.pdf with the latest development of course being http://chrisrackauckas.com/assets/Papers/ChrisRackauckas-AdaptiveSRK.pdf

The field diverged a bit, essentially because it is really hard to beat Euler's method, and so mathematicians still study it a little bit but pushed forward to new equations like RODEs and SPDEs, where some semblance of "better than Euler" is just coming out, while the more applied communities went to solving discrete stochastic (Gillespie) methods: ftp://www.cs.toronto.edu/na/reports/NA.Bio.Kin.Survey.rev1.pdf (pst! If anyone wants to join in with implementing these algorithms... SciML/JumpProcesses.jl#8)

Where if you estimate "lots of proteins", you have ODEs; "a good amount", you have SDEs; "very small", then you don't have concentrations any more, but discrete numbers, though you can still write out stochastic simulations (Gillespie's algorithm). But the breakthrough was the convergence results to SDEs, and then Burrage's Poisson Runge-Kutta methods (and Gillespie + Petzold's tau-leaping): essentially, a type of Runge-Kutta that works in the case where the numbers of things are discrete and stochastic. There's still a shit ton to work out there, but Kloeden's RODE work is interesting, because technically those discrete stochastic equations are RODEs, so higher-order methods for RODEs work even with discrete variables and other nonsense.
- Added a whole bunch of parallel/distributed/performance resources.
- Arch's slides, huge comment and recommended reading
- Loop fusion notebook and blog post
- Cache management
- Multidimensional arrays and iteration
- Do the damn manual
- v0.5 Highlights
- Parallel Accelerator
- OK, turns out Wonseok Shin is a Numerical Electromagnetism Beast. Casually asked him for numerical analysis course notes. Casual. Sweet.
- John Myles White is THE MAN for statistics; hopefully I now know a bit more about cache management in Julia.
- Read through the monster issue on multithreading. Looks like Cilk is the way to go :D

"I am not fond of the barrier-style of programming made popular by OpenMP for writing reusable components, because barriers introduce composability issues. The problem is that the barrier style fundamentally assumes that you know how many parallel threads of execution you have at the point of starting parallel work. Finding out the answer to that question at any moment in a large system (where other components vie for threads) is exactly the sort of global question that programmers learn not to ask in a scalable parallel program. Put another way, the 'control freak' style of parallel programming ('I totally control the machine') delivers great benchmark results. But you can't marry two control freaks." -- Arch D. Robison, 2014

- Arch Robison is a legend and answers emails like a boss.
- Organized some PathToPerformance stuff wrt courses and links and bibliography. Happily, Arch D. Robison's course on parallelism is based on Cilk+, which Julia is based on. The cosmos smiles at me... Quote of the day: "Your computer school was weak - I learned from the streets!"
- Helped Chris out with some PyCall stuff that they broke. Submitted a PR for a CONTRIBUTING.md to eschnetter/FunHPC.jl
- TODO: 100 Julia array comprehensions, 100 metaprogramming, 100 problems, a blog comparing MATLAB and Julia before-and-after codes.
- Almost done configuring Frankie. Investigate Pachyderm.io and Kubernetes.io
- Get my ass on DiffEqs before Chris' seminar!
- Must-read physics papers include Alfven, Nobel Prize 1941, made it in a page.
- @ScottPJones says that strings have not been a priority for devs in Julia, but Julia is uniquely suited to handle them in the future; delaying this is a problem for data handling.

"You can make parameterized string types that (using traits) deal with all sorts of encoding issues (big vs. little endian, linear indexed or not, possibly mutable vs. immutable, code unit size, code point range, validated or not, whether it is possible to go backwards (single-byte character sets, UTF-8, UTF-16, UTF-32 can, but most legacy multibyte character sets can't)), and have optimized code generated for different string encodings, writing only pretty generic code. (Take a look at @nalimilan's https://github.com/nalimilan/StringEncodings.jl for how that might look.)"
- Miletus is based on this paper and this paper by Simon Peyton Jones, J-M Eber, et al.
- Today in chat we talked about the case of Sage; I remember Felix Lawrence talking about it. Sad days. Academia does not take into account CI or software maintenance, just shitty scripts for pushing out papers. Fight or flight? :/ New bucket-list goal: make tears of academic pride flow from a fellow HPCer via proper development practices.
- Submitted GSoC proposal. Chat was invaluable. Remember Chris' advice about not building a compiler yourself: focus on your thing, let experts do theirs. Balance will be interesting.
- Read a gazillion Kahan papers. That man is a beast. Julia needs a FLOP debugger. Can Gallium pull it off? Also, Gustafson is misguided as heck; UNUMs are not coming anytime soon. Ever. For anything.
- Classical papers in numerical analysis by Trefethen are here.
- Miletus bug resolved. Properly reported. Now use support+juliafin@juliacomputing.com
- "Mathematics is the art of giving the same name to different things." - noted mathematician Henri Poincaré, quoted in the Julia paper.
- The Julia parallelism method is on page 95 of the paper.
- Never use a darn push array to plot; use an array comprehension! Props to Simon Byrne and the Miletus.jl manual.

```julia
function payoff_curve(c, d::Date, prices)
    payoff = [value(GeomBMModel(d, x, 0.0, 0.0, 0.0), c) for x in prices]
    p = [x.val for x in payoff]
    r = [x.val for x in prices]
    return r, p
end
```
- ParallelAccelerator talk. Awesome talk. Possibility of a Julia compiler, and of running awesome parallelization stuff. Announced multithreading. Jeeeeeeeeeeeeeeeeeeeeeeeeefffffffffff...
- Enumerate generates a lot of types, says Chris. Use this if you want tons of types for dispatch --> parallelization. Also, Chris says to handle distributed computing via multiple dispatch. Much pro, very wow.
- Jeff gives a good overview on how and why the internals of Julia work the way they do.
- Jameson Nash explains future avenues of research for statically compiling Julia and getting a mini hackable version going.
- Amit Murthy explains a little of how to spin up your own cluster.
- Downloaded a free copy of JuliaFIN. Let's hope beta access means future access for a while, as well as BLPAPI integration.
- Ctrl-R in the REPL helps you find code, says KristofferC.
- Turns out @fredrikekre and @KristofferC are behind JuAFEM.jl. Chris says I should consider applying for the DiffEqs.jl project on GSoC. Whelp, let's do it gang!
- Fixed spacing in Chris' PR. Perhaps a prettifying script is in order.
- Read about efficient laziness in the story "The Tale of the Man Who Was Too Lazy to Fail".
- @ararslan made a kick-ass commit explaining how to manage the zip function via a recursive algo. Awesome dude.
- Chris says MXNet > Mocha.jl
- Grammar/style fixes go in a separate PR.
- sample() from StatsBase is awesome.
- Custom allocators sound like a damn good idea. You basically allow memory pools, and optimize GC or shared memory.
- Rackauckas says about Sundials.jl:

Sundials is good because it's the modern Petzold algorithm, i.e. it's "the" multistep algorithm. I say "the" because they are literally all the same algorithm: Gear made it in the 70's, then that became vode, then Petzold did a bunch of stuff to make it lsode, modified it for DAEs to get DASSL and DASPK, then it was translated to C as cvode/ida. All these years, the same code. Though there are definitely many things that can be done better, and Sundials is actually quite slow in many cases, which is why I plan to at some point make a full adaptive-order + adaptive-timestep multistep method myself... but it's a huge undertaking. But... multistep methods minimize function evaluations, so if your PDE is really large, Sundials is the best there is. And since it handles stiff equations, they generally handle most equations (they don't handle large complex eigenvalues well, but still, ehhh), so they are a workhorse of scientific computing for that reason: it may be slow, but it'll always work, no matter how big the problem is, and it scales better than other algorithms.
- @dextorius says about custom allocators:
Any heap allocation you do from Julia (whenever you create, for instance, a new Vector) is handled by whichever version of malloc Julia uses internally. When I say "a custom allocator" I refer to the ability to instead use a different allocator from the default. In my case, that would mostly be for the purposes of introducing specific memory alignment in some data structures for performance reasons. This is a very common practice in languages like C/C++/D, especially when it comes to numerical code and GPGPU computing.
- Chris and the gang say that Shampine is the boss and that I should read about the MATLAB ODE suite, Solving DDEs in MATLAB, IRKC, and the IMEX solver for stiff blah blah. Duly noted.
- First attempt at a PR: #21208. GitHub hurts. Remember to add tests. Skip CI until the last PR.
- Wong-Zakai theorem for PDEs: 57-page paper.
- This is cool code from base/test/linalg/generic.jl:

```julia
# test ops on Numbers
for elty in [Float32, Float64, Complex64, Complex128]
    a = rand(elty)
    @test trace(a) == a
    @test rank(zero(elty)) == 0
    @test rank(one(elty)) == 1
    @test !isfinite(cond(zero(elty)))
    @test cond(a) == one(elty)
    @test cond(a, 1) == one(elty)
    @test issymmetric(a)
    @test ishermitian(one(elty))
    @test det(a) == a
end
```
- @KristofferC did an awesome talk on the FEM landscape. Take note of storing plots at 20 mins in. PETSc: holy grail of what?
- Jeff B's basic 2013 talk on Julia parallelism.
- Doing `A & 1` will give you booleans.
- An Oracle machine is used for complexity proofs in CS.
- Intel's new Optane memory announced - SSD killer?
- No `mode(A)` in base Julia?
- `whos()` tells you what is in memory.
- `join([string arrays], "delim")` is my friend.
- Hackerrank: 211,207
- Julia multithreading is in the works but is kept secret.
- `_foo` makes a handy internal implementation that uses dispatch; it means "don't rely on it".
- FINALLY solved Hackerrank 2. FINALLY!

```julia
readline(STDIN)
array = [parse(Int, s) for s in split(string(readline(STDIN)))]
print(mapreduce(x -> x, +, array))
```
- Chat says Julia is slow at strings and dynamic operations.
- Aliasing: @mbauman says "Yes, two objects alias each other when they point to the same memory region. In general, compilers must pessimistically assume that a write to object A may affect object B. So after writing to object B, it must re-load anything from object A before going on, since it might have changed." What @mbauman said, with the corollary that this forced assumption inhibits a great many optimizations with regards to reordering code, vectorizing, etc.
- Run-time library Julep: a BIG deal if you can set up a native Float. GPUs use Float16; order of magnitude difference or greater.
- Method of Lines: discretize one dimension in a PDE, then solve.
- Julia's GC makes ultra-low-latency a non-starter.
- Juleps are big, big development projects: Julia Enhancement Proposals.
- Haswells are new chip design methodologies to improve efficiency of computation and power savings?
- Skylake is an even better version.
- Clock multiplying is basically doing more instructions within the same clock cycle.
- Randall LeVeque made the amazing Claw software for hyperbolic systems.
- `ack` is an old-school search tool for refactoring code = reformatting so as to facilitate everything.
- AVX instructions are super compiler magics to super speed up compiler instructions. Basically SIMD.
- Added this to my to-dos... some MIT book Jeff and gitter really like, the Julia manual, [Julia for HPC]
- `@edit` takes you to the definition of the source code! :D
- 27:41 `...` after a tuple means extract those arguments and call the function again on those arguments.
- 44:21 a generated function acts on the typed AST!
- @KristofferC says: for bisecting, you can write a small script that passes/fails depending on what you are testing for, and then let git automatically bisect the whole way to the correct commit; see https://lwn.net/Articles/317154/
- Tuple types are covariant in their parameters: `Tuple{Int}` is a subtype of `Tuple{Any}`.

```julia
julia> typeof((1, 0.1)) <: NTuple{2}
false

julia> typeof((1, 0.1)) <: NTuple{2,Any}
true

julia> typeof([1,2]) <: Vector
true

julia> typeof([1,2]) <: Vector{Any}
false

julia> Tuple{Int,Int} <: NTuple{2}
true

julia> Tuple{Int,Float64} <: NTuple{2}
false

julia> Tuple{Int,Float64} <: NTuple{2,Any}
true
```
- Function inputs are tuple types, so they have to be covariant: e.g. if you have `f(::Int, ::Any)` and call `f(1, 2)`, dispatch happens based on whether `Tuple{Int,Int} <: Tuple{Int,Any}`.
- Typevars always get "filled" with concrete types when subtyping, which is another way of saying that `Tuple{T,T} where T` asserts that both type parameters are equal. Otherwise `f(x::T, y::T) where T` would be the same as `f(x::Any, y::Any)`, since you could have `T = Any`.
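That diagonal rule shows up directly in dispatch; a toy sketch (the function names are mine):

```julia
# Tuple{T,T} where T only matches when both arguments share a concrete type.
samekind(x::T, y::T) where {T} = "same type"
samekind(x, y) = "different types"

samekind(1, 2)      # "same type"
samekind(1, 2.0)    # "different types"
```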
- Help out CUDANative.jl. Help out Dagger.jl. Help out TensorOperations.jl.
- Got GitKraken. Told sister about it. Edit: 30/03: ohshitgit.com
- Gitter says

```julia
function f(x)
    function g()
        if is(x, Int)
            1
        elseif 0
        end
    end
    g()
end
```

is the same as `type G{T} x::T end; g = G(x)` and defining call on `::G`... @MikeInnes
- `@test_nowarn @example`
- @mbauman says `NTuple{N}` is now the same as `NTuple{N,T} where T` (this means `where T <: Any`).
- git bisecting finds an offending commit. Builds may take time. Beware...
- Read Jeff's PhD thesis. Julia's heart/subtyping algorithm still freakin' hieroglyphical. Will return. Edit: 30/03/17: the subtyping algo is explained a bit more in his Julia Internals JuliaCon 2015 talk.
- Ran Valgrind on all Julia 0.5, ValidatedNumerics.jl, and TaylorSeries.jl source code. No memleaks. Nice.
- @ChrisRackauckas: Bikeshedding = arguing about syntax.

"The term was coined as a metaphor to illuminate Parkinson's Law of Triviality. Parkinson observed that a committee whose job is to approve plans for a nuclear power plant may spend the majority of its time on relatively unimportant but easy-to-grasp issues, such as what materials to use for the staff bikeshed, while neglecting the design of the power plant itself, which is far more important but also far more difficult to criticize constructively. It was popularized in the Berkeley Software Distribution community by Poul-Henning Kamp and has spread from there to the software industry at large."
- Tests might be "bleeding", i.e. one test is setting a value that changes another test.
- The fantastic @ChrisRackauckas post on software engineering practices for Julia.
- GitHub hooks (in Rule 5) are scripts that run every time a push happens, so that Continuous Integration and Testing can happen. Travis CI for UNIX, AppVeyor for Windows.
- Use Digital Object Identifiers to cite source code/databases. Zenodo is recommended, or Figshare.
- Use Gists that are secret from search engines!
TO DOS
- Stanford SQL
- Numerical Analysis - Homer Reid
- IEEE 754 FLOP arithmetic with exercises!
- Geometric Numerical Integration - Ernst Hairer et al
- Numerical Linear Algebra - Trefethen
- Avoid false sharing - Intel
- [Geometric Numerical Integration - Hairer]
- PDF scheduling and multithreading bonanza as per #21017
- [Deep Learning MIT]
- some MIT book Jeff and gitter really like
- Tony Kelman's talk on CI, and his notebook
- Chris Rackauckas full CI tips implementation on thesis.
- His other awesome post and this one
- Dear lord Chris you are a machine
- To Lisp or not to lisp - Stefan Karpinski
- CUDA, Julia, and Deep Learning
- [Julia - HPC book]
- Arch D. Robison parallel computing book
- Study Julia source code from Plots.jl and DifferentialEquations.jl
- Workshop 1 update to Arch Robison?
Econ/Finance
- Plots.jl
- https://github.com/JuliaParallel/Dagger.jl
- https://github.com/IntelLabs/ParallelAccelerator.jl
- https://github.com/pluskid/Mocha.jl
- https://github.com/IntelLabs/HPAT.jl
- https://juliaplots.github.io/
- Spark
- Workshops 1
- Workshop 2
- [Workshop 3]
Julia to do's
- Jeff Bezanson - theory update on Julia's type inference algorithm since conception
- Latex in Documenter?
- WTF is a tuple bitstype bufferio AbstractArray generalizedcartesianindex
- Julia Typesupertree
- WTF are xxx calls?
- Lifting
- Bottom is the least common Type
- promotion rules
- splatting = pass a structure of n values as n separate args to a function
- linear algebra matrix type definitions andreas noack
- Introducing Julia/types
- 100 julia exercises
- numerical recipes in Julia
- distributed recipes in Julia
- metaprogramming exercises - https://discourse.julialang.org/t/simple-metaprogramming-exercises-challenges/731
- Help Sanders
- Help Benet
- Help Mendezs matrix playground - subtypes(AbstractArray)
- read floating point standard 754 and exercises
- read the 1788 standard and exercises
- splatting
- matrix playground