alternative webpage: http://people.ricam.oeaw.ac.at/k.schnass/index.html
uaaaa, seems we finally have a webpage! I finished the theory part of the 'non-tight dl' paper, so all that's missing is the decoration, i.e. intro, sexy curves, etc. - highly motivating tasks => the SPARS trailer is now up and running, and I decided to switch to LaTeX for slides; not to worry, Johnny Depp is already included.
Another year older ...the new paper 'learning non-tight dictionaries or so' is coming along - not nicely, but sweatily. I made a hopefully entertaining trailer for the talk at SPARS; I still have to edit it though. To prepare in the meantime, I recommend watching Ratatouille and reading some dictionary identification papers, e.g. ksvd, l1 or noisy l1.
All my conference papers got accepted and the FWF agreed to pay, hurray! That means one week of holiday in Bremen at SampTA13 and two weeks in Lausanne at SPARS13 and Enumath13 (these two weeks are an especially hard sacrifice :D). I'm allowed to talk about dictionary learning via K-SVD at all 3 of them, so if you don't want to read the paper, you have the chance on July 4 at 18:10 in Bremen and July 9 at 14:50 in Lausanne.
Ajo, the K-SVD paper is almost finished, including finite sample size results, just missing a sexy little graph. While running simulations and wondering about employment after the current project, I realised the following (WARNING: this is how to lie with statistics!): so far 3 of my co-authors (Holger, Remi, Massimo) have won ERC Starting Grants after writing a paper with me. That is 50% (Boris, Jan and Pierre have not won ...yet). Conclusion: if you want to increase your chances, write me an email and we'll do a paper together :D.
After a week of lacking motivation and brooding, I decided to withdraw and rework the paper to include the finite sample size, because "Thou shalt not publish rubbish!" ...even if you sorely need publications in order to get a permanent job :D. A preview of what the full thing will look like can be found here.
After nearly 2 years I finally finished the dictionary learning paper, which turned out to be about asymptotic results for K-SVD. Unfortunately, in my desperation to submit and get it out of the way, I overlooked that in conceptually 2 lines (and practically 6-10 pages) it can be extended to the finite sample size. The eye-opener was Remi's new paper about dictionary learning via l_1 minimisation. So I'll do that now, and if we are lucky, the reviewers will ask about the finite sample size and I'll get the chance to make a looooong but beautiful paper; if not, all those who are interested will have to read two short homely papers, sorry for that. I'll try to get a first statement ready for SampTA....