<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Senko Rašić</title>
    <link>https://blog.senko.net/</link>
    <description>A practicing bit-shifting magician turned cat herder</description>
    <pubDate>Fri, 23 Feb 2024 06:13:50 +0000</pubDate>
    <item>
      <title>Curation, filter bubbles, enshittification and information overload</title>
      <link>https://blog.senko.net/curation-filter-bubbles-enshittification-and-information-overload?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A long time ago, information was hard to find. You had to go out there and look for it. Nowadays, a lot of information is at our fingertips at any moment.&#xA;&#xA;Now we have the opposite problem: too much information. The challenge today is curation: we want to consume a reasonable amount of information that&#39;s relevant to us.&#xA;&#xA;What&#39;s &#34;reasonable&#34; and what&#39;s &#34;relevant&#34; is tough to define as it is person-specific. What&#39;s reasonable and relevant to you may not be the same as what&#39;s reasonable and relevant to me.&#xA;&#xA;From curation ...&#xA;&#xA;Curation of information is nothing new: newspapers, radio and TV have done it since the beginning. We relied on the media to pick what&#39;s most important and only kept the choice of a source we trusted or liked.&#xA;&#xA;As the Internet grew, the amount of information grew exponentially. One of the reasons for Google&#39;s early success is that it was very good at curating this information. While the other search engines could also find 10,000 pages matching your search, Google was the best at picking what was most useful, informative and relevant.&#xA;&#xA;Other services were also trying their best at curation: Netflix had a great recommendation algorithm, and for years ran a public competition to improve it and provide even better recommendations.&#xA;&#xA;... to filter bubbles&#xA;&#xA;Over the years, as the various algorithms got improved and optimized, they got subjectively worse for us, the users. Google is in an arms race with SEO folks, Facebook prefers to serve you content that will rile you up, Netflix&#39;s algorithm seems braindead, and the less we talk about YouTube recommendations the better.&#xA;&#xA;Why is this? It turns out that, at some point in their lifecycle, big companies had to choose between optimizing for the user experience and optimizing for revenue. The algorithms improve all right ... 
but using a different metric.&#xA;&#xA;Google now hyper-optimizes for what it thinks you should see (filter bubble), Facebook serves you whatever will keep you on the site for longer (mind-numbing memes interspersed with viral outrages), and Netflix puts front and center its own content that it would like you to get hooked on (and not cancel the service), not the things you might actually want to see.&#xA;&#xA;Curation for the benefit of the user has turned into serving targeted content for the benefit of the company.&#xA;&#xA;Enshittification&#xA;&#xA;Enshittification is an ugly word to describe an ugly thing. Basically, it is a strategy shift in which an online platform switches from user growth (where optimizing the user experience is paramount) to revenue growth (where you need to squeeze maximum income from each user).&#xA;&#xA;Enshittification, or platform decay, is not limited to Internet media companies. But it is especially visible in online startups that grew big by giving away their product or undercharging for it, in order to grow as fast and as big as they can.&#xA;&#xA;At some point the platform has to earn money, as much money as possible, and since it has a mostly-captive audience (there is only one Google, Facebook, YouTube or Twitter), users won&#39;t leave if the experience gets marginally worse.&#xA;&#xA;And so the UX frog gets slowly cooked. People today are hooked on Facebook scrolling, Twitter mob rages, memes everywhere and 10-second dopamine hits on TikTok or YouTube Shorts. 
It&#39;s the sugar thing all over again.&#xA;&#xA;Information detox&#xA;&#xA;Faced with this, some people try limiting their consumption of this kind of content (me included, see my Junk Social and Digital hygiene posts).&#xA;&#xA;This can work if you&#39;re willing to limit your information access, don&#39;t suffer from the fear of missing out (FOMO) and have the mental strength to not succumb when you&#39;re tired, bored or just open up your favorite social media app on autopilot because it&#39;s on your mobile phone home screen.&#xA;&#xA;But it&#39;s hard work: you&#39;re basically building and maintaining a Great Wall between yourself and most of the modern Internet.&#xA;&#xA;Curation on our terms&#xA;&#xA;Can we do it better? Is there a way to once again outsource curation of content that is optimized for us, the users?&#xA;&#xA;Curation is hard work, whether you build and maintain a sophisticated algorithm or AI to do it, or have actual people doing the work. And it&#39;s not entirely obvious how that would work without degenerating into the filter bubbles and social media we have today - that&#39;s how some of those companies started, after all.&#xA;&#xA;As a technologist I do believe a solution is within the realm of possibility. As someone who&#39;s watched the Internet grow from an academic and hobbyist garden into what it is today, I am skeptical we&#39;ll reach that solution (and if you really want to be scared about the prospects of it, go read The Master Switch).&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>A long time ago, information was hard to find. You had to go out there and look for it. Nowadays, a lot of information is at our fingertips at any moment.</p>

<p>Now we have the opposite problem: too much information. The challenge today is curation: we want to consume a reasonable amount of information that&#39;s relevant to us.</p>

<p>What&#39;s “reasonable” and what&#39;s “relevant” is tough to define as it is person-specific. What&#39;s reasonable and relevant to you may not be the same as what&#39;s reasonable and relevant to me.</p>

<h2 id="from-curation">From curation ...</h2>

<p>Curation of information is nothing new: newspapers, radio and TV have done it since the beginning. We relied on the media to pick what&#39;s most important and only kept the choice of a source we trusted or liked.</p>

<p>As the Internet grew, the amount of information grew exponentially. One of the reasons for Google&#39;s early success is that it was very good at curating this information. While the other search engines could also find 10,000 pages matching your search, Google was the best at picking what was most useful, informative and relevant.</p>

<p>Other services were also trying their best at curation: Netflix had a great recommendation algorithm, and for years ran a public competition to improve it and provide even better recommendations.</p>

<h2 id="to-filter-bubbles">... to filter bubbles</h2>

<p>Over the years, as the various algorithms got improved and optimized, they got subjectively <em>worse</em> for us, the users. Google is in an arms race with SEO folks, Facebook prefers to serve you content that will rile you up, Netflix&#39;s algorithm seems braindead, and the less we talk about YouTube recommendations the better.</p>

<p>Why is this? It turns out that, at some point in their lifecycle, big companies had to choose between optimizing for the user experience and optimizing for revenue. The algorithms improve all right ... but using a different metric.</p>

<p>Google now hyper-optimizes for what it thinks you should see (filter bubble), Facebook serves you whatever will keep you on the site for longer (mind-numbing memes interspersed with viral outrages), and Netflix puts front and center its own content that it would like you to get hooked on (and not cancel the service), not the things you might actually want to see.</p>

<p>Curation for the benefit of the user has turned into serving targeted content for the benefit of the company.</p>

<h2 id="enshittification">Enshittification</h2>

<p><a href="https://en.wikipedia.org/wiki/Enshittification" rel="nofollow">Enshittification</a> is an ugly word to describe an ugly thing. Basically, it is a strategy shift in which an online platform switches from user growth (where optimizing the user experience is paramount) to revenue growth (where you need to squeeze maximum income from each user).</p>

<p>Enshittification, or platform decay, is not limited to Internet media companies. But it is especially visible in online startups that grew big by giving away their product or undercharging for it, in order to grow as fast and as big as they can.</p>

<p>At some point the platform has to earn money, as much money as possible, and since it has a mostly-captive audience (there is only one Google, Facebook, YouTube or Twitter), users won&#39;t leave if the experience gets marginally worse.</p>

<p>And so the UX frog gets slowly cooked. People today are hooked on Facebook scrolling, Twitter mob rages, memes everywhere and 10-second dopamine hits on TikTok or YouTube Shorts. It&#39;s the sugar thing all over again.</p>

<h2 id="information-detox">Information detox</h2>

<p>Faced with this, some people try limiting their consumption of this kind of content (me included, see my <a href="https://blog.senko.net/junk-social" rel="nofollow">Junk Social</a> and <a href="https://blog.senko.net/digital-hygiene" rel="nofollow">Digital hygiene</a> posts).</p>

<p>This can work if you&#39;re willing to limit your information access, don&#39;t suffer from the fear of missing out (FOMO) and have the mental strength to not succumb when you&#39;re tired, bored or just open up your favorite social media app on autopilot because it&#39;s on your mobile phone home screen.</p>

<p>But it&#39;s hard work: you&#39;re basically building and maintaining a Great Wall between yourself and most of the modern Internet.</p>

<h2 id="curation-on-our-terms">Curation on our terms</h2>

<p>Can we do it better? Is there a way to once again outsource curation of content that is optimized for us, the users?</p>

<p>Curation is hard work, whether you build and maintain a sophisticated algorithm or AI to do it, or have actual people doing the work. And it&#39;s not entirely obvious how that would work without degenerating into the filter bubbles and social media we have today – that&#39;s how some of those companies started, after all.</p>

<p>As a technologist I do believe a solution is within the realm of possibility. As someone who&#39;s watched the Internet grow from an academic and hobbyist garden into what it is today, I am skeptical we&#39;ll reach that solution (and if you <em>really</em> want to be scared about the prospects of it, go read <a href="https://www.penguinrandomhouse.com/books/194417/the-master-switch-by-tim-wu/" rel="nofollow">The Master Switch</a>).</p>
]]></content:encoded>
      <guid>https://blog.senko.net/curation-filter-bubbles-enshittification-and-information-overload</guid>
      <pubDate>Sun, 01 Oct 2023 15:05:47 +0000</pubDate>
    </item>
    <item>
      <title>Learn AI</title>
      <link>https://blog.senko.net/learn-ai?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A creative robot&#xA;&#xA;Any sufficiently advanced technology is indistinguishable from magic.&#xA;&#xA;Modern AIs and ChatGPT in particular look like magic to many people. This can lead to misunderstandings about their strengths and weaknesses, and a lot of unsubstantiated hype.&#xA;&#xA;Learning about a technology is the best antidote. For curious computer scientists, software engineers and anyone else who isn&#39;t afraid of digging a bit deeper, I&#39;ve compiled a list of useful resources on the topic.&#xA;&#xA;This is basically my reading/watching list, organized from more fundamental or beginner friendly to the latest advances in the field. It&#39;s not exhaustive, but should give you (and me) enough knowledge to continue exploring and experimenting on your own.&#xA;&#xA;General overviews&#xA;&#xA;If you don&#39;t have a lot of time, or don&#39;t know if you want to dedicate effort to learning the ins and outs of modern AIs, watch these first to get a general overview:&#xA;&#xA;The busy person&#39;s intro to LLMs (for text-generation and chatbot AIs)&#xA;Diffusion models explained in 4 difficulty levels (for image generation AIs)&#xA;&#xA;Fundamentals of neural networks&#xA;&#xA;The videos here provide both a theoretical and a hands-on introduction to the fundamentals of neural networks.&#xA;&#xA;MIT Introduction to Deep Learning&#xA;&#xA;A good theoretical intro is MIT&#39;s 6.S191 class lectures, especially the Introduction to Deep Learning and Recurrent Neural Networks, Transformers and Attention.&#xA;&#xA;These overview lectures briefly introduce all the major elements and algorithms involved in creating and training neural networks. 
I don&#39;t think they work on their own (unless you&#39;re a student there, do all the in-class exercises, etc.), but it&#39;s a good place to start.&#xA;&#xA;The topics discussed here will probably make your head spin, and it won&#39;t be clear at all how to apply them in real life, but this will give you the lay of the land and prepare you for a practical dive-in with, for example, Andrej&#39;s &#34;Zero to Hero&#34;. &#xA;&#xA;The example code slides use TensorFlow. Since Andrej&#39;s course uses PyTorch, going through both sets of lectures will expose you to the two most popular deep learning libraries.&#xA;&#xA;Neural Networks: Zero to Hero&#xA;&#xA;An awesome practical intro is the Neural Networks: Zero to Hero course by Andrej Karpathy (he also did the busy person&#39;s intro to LLMs linked above).&#xA;&#xA;Andrej starts out slowly, by spelling out the computation involved in forward and backward passes of the neural network, and then gradually builds up to a single neuron, a single-layer network, multi-layer perceptron, deep networks and finally transformers (like GPT).&#xA;&#xA;Throughout this, he introduces and uses tools like PyTorch (a library for writing neural networks), Jupyter Notebook, and Google Colab. Importantly, he first introduces and implements a concept manually, and only later switches to a PyTorch API that provides the same thing.&#xA;&#xA;The only part where things look a bit rushed is the (currently) last one - GPT. There&#39;s so much ground to cover there that Andrej skips over some parts (like the Adam optimization algorithm) and quickly goes over the others (self-attention, cross-attention).&#xA;&#xA;Overall a great guide. 
You only need to know the basics of Python, not be afraid of math (the heaviest of which is matrix multiplication, which is spelled out), and do the exercises (code along with the videos) without skipping the videos that don&#39;t seem exciting.&#xA;&#xA;Understanding Word2vec&#xA;&#xA;Both the MIT and Andrej&#39;s lectures touch on embeddings (the way to turn words into numbers that a neural net can use) only lightly. To deepen your understanding, the Illustrated word2vec article explains word2vec, a popular word embedding algorithm, step by step. It also features a video explanation for those who prefer it to text.&#xA;&#xA;Another good lecture on the topic is Understanding Word2vec.&#xA;&#xA;CNNs, autoencoders and GANs&#xA;&#xA;The MIT lectures mentioned earlier also contain lessons on Convolutional Neural Networks, autoencoders and GANs, which are important building blocks in neural networks used in vision.&#xA;&#xA;Again, these are high-level overviews: although formulas are present, the lectures give more of an overview of the algorithms without going into too much detail. That makes them an ideal prequel to the Practical Deep Learning course by Fast.ai.&#xA;&#xA;Diffusion models&#xA;&#xA;Diffusion models build on top of CNNs to create image-generating and manipulating AI models. 
Beyond the general overview linked earlier, the Introduction to Diffusion Models for Machine Learning is a deep dive into exactly how they work.&#xA;&#xA;Practical Deep Learning&#xA;&#xA;Practical Deep Learning is a free course by Fast.ai that has (at the current count) 25 lectures covering both the high-level practical parts of neural networks and the underlying fundamentals.&#xA;&#xA;In particular, in Part 2 they cover &#34;zero to hero&#34; on Stable Diffusion, a powerful image-generation AI model.&#xA;&#xA;Large Language Models&#xA;&#xA;These resources go in-depth about constructing and using large language models (like GPT):&#xA;&#xA;Transformers&#xA;&#xA;Andrej&#39;s course goes over the transformer architecture (the building block of GPT), but the complexity makes it easy to get lost on a first pass. To solidify your understanding of the topic, these two are super useful:&#xA;&#xA;The Illustrated Transformer describes the transformer in detail while avoiding tedious math or programming details. It provides a good intuition into what&#39;s going on (and there&#39;s even an accompanying video you may want to watch as a gentler intro).&#xA;&#xA;Follow that up with The Annotated Transformer, which describes the scientific paper that introduced Transformers and implements it in PyTorch. Since it&#39;s a 1:1 annotation of the paper, you need a lot of understanding already, so only attempt it after you&#39;ve watched Andrej&#39;s course and read and understood the Illustrated Transformer.&#xA;&#xA;Reinforcement Learning from Human Feedback&#xA;&#xA;Language models are good at predicting and generating text, which is different from answering questions or having a conversation. RLHF is used to fine-tune the models so they can communicate in this way.&#xA;&#xA;Illustrating Reinforcement Learning from Human Feedback from the folks at HuggingFace (an open source AI community) provides a good overview of RLHF. 
They also did a webinar based on the blog post (the video is the complete webinar; the link jumps directly to the start of the RLHF description).&#xA;&#xA;If you want to dive deeper, here&#39;s the InstructGPT paper from OpenAI, which basically describes the method they used to create ChatGPT out of GPT-3 (InstructGPT was a research precursor to ChatGPT).&#xA;&#xA;Fine-tuning&#xA;&#xA;Fine-tuning allows us to refine or specialize an already pretrained LLM to be better at a specific task (RLHF is one example).&#xA;&#xA;Sebastian Raschka&#39;s Finetuning Large Language Models explains a few common approaches to finetuning, with code examples using PyTorch. He follows that up with Understanding Parameter-Efficient LLM Finetuning, a blog post discussing ways to lower the number of parameters required, and an in-depth article about Parameter-Efficient LLM Finetuning with Low-Rank Adaptation (LoRA).&#xA;&#xA;Full courses&#xA;&#xA;If you want a really deep dive (undergrad or higher level), follow these courses, including doing the exercises and playing around with the code:&#xA;&#xA;Neural Networks Zero to Hero (Andrej Karpathy)&#xA;Introduction to Deep Learning (MIT)&#xA;Introduction to Deep Learning (Sebastian Raschka)&#xA;Practical Deep Learning for Coders (Fast.ai)&#xA;&#xA;---&#xA;&#xA;This is a living document (i.e. it&#39;s a work in progress and always will be). Come back in a few weeks and check if there&#39;s anything new.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://s3.amazonaws.com/vault.senko.net/blog/robot-writing.jpg" alt="A creative robot"/></p>

<p>Any sufficiently advanced technology is indistinguishable from magic.</p>

<p>Modern AIs and ChatGPT in particular look like magic to many people. This can lead to misunderstandings about their strengths and weaknesses, and a <em>lot</em> of unsubstantiated hype.</p>

<p>Learning about a technology is the best antidote. For curious computer scientists, software engineers and anyone else who isn&#39;t afraid of digging a bit deeper, I&#39;ve compiled a list of useful resources on the topic.</p>

<p>This is basically my reading/watching list, organized from more fundamental or beginner friendly to the latest advances in the field. It&#39;s not exhaustive, but should give you (and me) enough knowledge to continue exploring and experimenting on your own.</p>

<h2 id="general-overviews">General overviews</h2>

<p>If you don&#39;t have a lot of time, or don&#39;t know if you want to dedicate effort to learning the ins and outs of modern AIs, watch these first to get a general overview:</p>
<ul><li><a href="https://www.youtube.com/watch?v=zjkBMFhNj_g" rel="nofollow">The busy person&#39;s intro to LLMs</a> (for text-generation and chatbot AIs)</li>
<li><a href="https://www.youtube.com/watch?v=yTAMrHVG1ew" rel="nofollow">Diffusion models explained in 4 difficulty levels</a> (for image generation AIs)</li></ul>

<h2 id="fundamentals-of-neural-networks">Fundamentals of neural networks</h2>

<p>The videos here provide both a theoretical and a hands-on introduction to the fundamentals of neural networks.</p>

<h3 id="mit-introduction-to-deep-learning">MIT Introduction to Deep Learning</h3>

<p>A good theoretical intro is MIT&#39;s 6.S191 class lectures, especially the <a href="https://www.youtube.com/watch?v=QDX-1M5Nj7s&amp;list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI" rel="nofollow">Introduction to Deep Learning</a> and <a href="https://www.youtube.com/watch?v=ySEx_Bqxvvo&amp;list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI" rel="nofollow">Recurrent Neural Networks, Transformers and Attention</a>.</p>

<p>These overview lectures briefly introduce all the major elements and algorithms involved in creating and training neural networks. I don&#39;t think they work on their own (unless you&#39;re a student there, do all the in-class exercises, etc.), but it&#39;s a good place to start.</p>

<p>The topics discussed here will probably make your head spin, and it won&#39;t be clear at all how to apply them in real life, but this will give you the lay of the land and prepare you for a practical dive-in with, for example, Andrej&#39;s “Zero to Hero”.</p>

<p>The example code slides use TensorFlow. Since Andrej&#39;s course uses PyTorch, going through both sets of lectures will expose you to the two most popular deep learning libraries.</p>

<h3 id="neural-networks-zero-to-hero">Neural Networks: Zero to Hero</h3>

<p>An awesome practical intro is the <a href="https://karpathy.ai/zero-to-hero.html" rel="nofollow">Neural Networks: Zero to Hero</a> course by <a href="https://karpathy.ai/" rel="nofollow">Andrej Karpathy</a> (he also did the busy person&#39;s intro to LLMs linked above).</p>

<p>Andrej starts out slowly, by spelling out the computation involved in forward and backward passes of the neural network, and then gradually builds up to a single neuron, a single-layer network, multi-layer perceptron, deep networks and finally transformers (like GPT).</p>
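<p>To give a flavor of what spelling out those computations looks like, here&#39;s a minimal sketch (my own toy example, not code from the course) of the forward and backward pass through a single tanh neuron in plain Python:</p>

```python
import math

def neuron_forward(w, b, xs):
    """Forward pass: weighted sum of inputs plus bias, squashed by tanh."""
    z = sum(wi * xi for wi, xi in zip(w, xs)) + b
    return math.tanh(z)

def neuron_backward(w, b, xs, grad_out):
    """Backward pass: chain rule through tanh and the weighted sum."""
    z = sum(wi * xi for wi, xi in zip(w, xs)) + b
    a = math.tanh(z)
    dz = grad_out * (1 - a * a)   # d(tanh z)/dz = 1 - tanh(z)^2
    dw = [dz * xi for xi in xs]   # gradient w.r.t. each weight
    db = dz                       # gradient w.r.t. the bias
    return dw, db
```

<p>Backpropagation through a whole network is just this chain-rule bookkeeping repeated for every operation, which is exactly what the course builds up one step at a time.</p>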

<p>Throughout this, he introduces and uses tools like PyTorch (a library for writing neural networks), Jupyter Notebook, and Google Colab. Importantly, he first introduces and implements a concept <em>manually</em>, and only later switches to a PyTorch API that provides the same thing.</p>

<p>The only part where things look a bit rushed is the (currently) last one – GPT. There&#39;s so much ground to cover there that Andrej skips over some parts (like the Adam optimization algorithm) and quickly goes over the others (self-attention, cross-attention).</p>

<p>Overall a great guide. You only need to know the basics of Python, not be afraid of math (the heaviest of which is matrix multiplication, which is spelled out), and do the exercises (code along with the videos) without skipping the videos that don&#39;t seem exciting.</p>

<h3 id="understanding-word2vec">Understanding Word2vec</h3>

<p>Both the MIT and Andrej&#39;s lectures touch on embeddings (the way to turn words into numbers that a neural net can use) only lightly. To deepen your understanding, the <a href="https://jalammar.github.io/illustrated-word2vec/" rel="nofollow">Illustrated word2vec</a> article explains word2vec, a popular word embedding algorithm, step by step. It also features a video explanation for those who prefer it to text.</p>

<p>Another good lecture on the topic is <a href="https://www.youtube.com/watch?v=QyrUentbkvw" rel="nofollow">Understanding Word2vec</a>.</p>
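<p>The core idea is that words become vectors, and words used in similar contexts end up with similar vectors. A toy illustration (hand-written 4-dimensional vectors for illustration only; real word2vec embeddings are learned and have hundreds of dimensions) using cosine similarity:</p>

```python
import math

# Hypothetical toy "embeddings", hand-written for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.1, 0.8, 0.3],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

<p>Here "king" ends up closer to "queen" than to "apple", which is the property that lets neural nets treat related words similarly.</p>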

<h3 id="cnns-autoencoders-and-gans">CNNs, autoencoders and GANs</h3>

<p>The MIT lectures mentioned earlier also contain lessons on <a href="https://www.youtube.com/watch?v=NmLK_WQBxB4&amp;list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI" rel="nofollow">Convolutional Neural Networks</a>, <a href="https://www.youtube.com/watch?v=3G5hWM6jqPk&amp;list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI" rel="nofollow">autoencoders and GANs</a>, which are important building blocks in neural networks used in vision.</p>

<p>Again, these are high-level overviews: although formulas are present, the lectures give more of an overview of the algorithms without going into too much detail. That makes them an ideal prequel to the Practical Deep Learning course by Fast.ai.</p>

<h2 id="diffusion-models">Diffusion models</h2>

<p>Diffusion models build on top of CNNs to create image-generating and manipulating AI models. Beyond the general overview linked earlier, the <a href="https://www.assemblyai.com/blog/diffusion-models-for-machine-learning-introduction/" rel="nofollow">Introduction to Diffusion Models for Machine Learning</a> is a deep dive into exactly how they work.</p>
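<p>For intuition (a sketch of the standard formulation, not code from the linked article): the forward diffusion process gradually destroys an image by mixing it with Gaussian noise, and the model learns to reverse that. A single jump to cumulative noise level alpha_bar looks like:</p>

```python
import math
import random

def add_noise(x, alpha_bar, rng):
    """Forward diffusion jump: x_t = sqrt(a) * x_0 + sqrt(1 - a) * noise,
    where a (alpha_bar) goes from 1.0 (clean image) down to 0.0 (pure noise)."""
    return [
        math.sqrt(alpha_bar) * xi + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
        for xi in x
    ]
```

<p>Training then amounts to asking a network to predict the added noise at a random noise level; generation runs the process in reverse, starting from pure noise.</p>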

<h3 id="practical-deep-learning">Practical Deep Learning</h3>

<p><a href="https://course.fast.ai/" rel="nofollow">Practical Deep Learning</a> is a free course by Fast.ai that has (at the current count) 25 lectures covering both the high-level practical parts of neural networks and the underlying fundamentals.</p>

<p>In particular, in <a href="https://www.youtube.com/watch?v=_7rMfsA24Ls&amp;list=PLfYUBJiXbdtRUvTUYpLdfHHp9a58nWVXP" rel="nofollow">Part 2</a> they cover “zero to hero” on Stable Diffusion, a powerful image-generation AI model.</p>

<h2 id="large-language-models">Large Language Models</h2>

<p>These resources go in-depth about constructing and using large language models (like GPT):</p>

<h3 id="transformers">Transformers</h3>

<p>Andrej&#39;s course goes over the transformer architecture (the building block of GPT), but the complexity makes it easy to get lost on a first pass. To solidify your understanding of the topic, these two are super useful:</p>

<p><a href="https://jalammar.github.io/illustrated-transformer/" rel="nofollow">The Illustrated Transformer</a> describes the transformer in detail while avoiding tedious math or programming details. It provides a good intuition into what&#39;s going on (and there&#39;s even an accompanying video you may want to watch as a gentler intro).</p>

<p>Follow that up with <a href="http://nlp.seas.harvard.edu/annotated-transformer/" rel="nofollow">The Annotated Transformer</a>, which describes the scientific paper that introduced Transformers and implements it in PyTorch. Since it&#39;s a 1:1 annotation of the paper, you need a lot of understanding already, so only attempt it after you&#39;ve watched Andrej&#39;s course and read and understood the Illustrated Transformer.</p>
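<p>If it helps to see the central operation concretely, here&#39;s a bare-bones sketch of scaled dot-product self-attention (a toy plain-Python implementation of the standard formula, not code from either article): each position scores every other position, turns the scores into weights, and averages the values:</p>

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

<p>Real transformers add learned projections for Q, K and V, multiple heads, and masking, but the core computation is just this.</p>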

<h3 id="reinforcement-learning-through-human-feedback">Reinforcement Learning from Human Feedback</h3>

<p>Language models are good at predicting and generating text, which is different from answering questions or having a conversation. RLHF is used to fine-tune the models so they can communicate in this way.</p>

<p><a href="https://huggingface.co/blog/rlhf" rel="nofollow">Illustrating Reinforcement Learning from Human Feedback</a> from the folks at <a href="https://huggingface.co/" rel="nofollow">HuggingFace</a> (an open source AI community) provides a good overview of RLHF. They also did a <a href="https://www.youtube.com/live/2MBJOuVq380?t=496" rel="nofollow">webinar based on the blog post</a> (the video is the complete webinar; the link jumps directly to the start of the RLHF description).</p>

<p>If you want to dive deeper, here&#39;s the <a href="https://arxiv.org/abs/2203.02155" rel="nofollow">InstructGPT paper</a> from OpenAI, which basically describes the method they used to create ChatGPT out of GPT-3 (InstructGPT was a research precursor to ChatGPT).</p>

<h3 id="fine-tuning">Fine-tuning</h3>

<p>Fine-tuning allows us to refine or specialize an already pretrained LLM to be better at a specific task (RLHF is one example).</p>

<p>Sebastian Raschka&#39;s <a href="https://magazine.sebastianraschka.com/p/finetuning-large-language-models" rel="nofollow">Finetuning Large Language Models</a> explains a few common approaches to finetuning, with code examples using PyTorch. He follows that up with <a href="https://magazine.sebastianraschka.com/p/understanding-parameter-efficient" rel="nofollow">Understanding Parameter-Efficient LLM Finetuning</a>, a blog post discussing ways to lower the number of parameters required, and an in-depth article about <a href="https://lightning.ai/pages/community/tutorial/lora-llm/" rel="nofollow">Parameter-Efficient LLM Finetuning with Low-Rank Adaptation (LoRA)</a>.</p>
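<p>The punchline of LoRA is simple arithmetic (my own back-of-the-envelope sketch, not Raschka&#39;s code): instead of updating a full d_in × d_out weight matrix, you learn two small matrices A (rank × d_in) and B (d_out × rank) and use their product as the update, so the trainable parameter count drops dramatically:</p>

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare a full weight-update matrix against a rank-`rank`
    LoRA update W + B @ A, where A is (rank x d_in) and B is (d_out x rank)."""
    full = d_in * d_out
    lora = rank * d_in + d_out * rank
    return full, lora
```

<p>For a hypothetical 4096 × 4096 layer at rank 8, that&#39;s roughly 16.8M trainable parameters versus roughly 65K, a 256-fold reduction, which is why LoRA finetuning fits on modest hardware.</p>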

<h2 id="full-courses">Full courses</h2>

<p>If you want a <em>really</em> deep dive (undergrad or higher level), follow these courses, <em>including</em> doing the exercises and playing around with the code:</p>
<ul><li><a href="https://karpathy.ai/zero-to-hero.html" rel="nofollow">Neural Networks Zero to Hero</a> (Andrej Karpathy)</li>
<li><a href="http://introtodeeplearning.com/" rel="nofollow">Introduction to Deep Learning</a> (MIT)</li>
<li><a href="https://sebastianraschka.com/blog/2021/dl-course.html" rel="nofollow">Introduction to Deep Learning</a> (Sebastian Raschka)</li>
<li><a href="https://course.fast.ai/" rel="nofollow">Practical Deep Learning for Coders</a> (Fast.ai)</li></ul>

<hr/>

<p>This is a living document (i.e. it&#39;s a work in progress and always will be). Come back in a few weeks and check if there&#39;s anything new.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/learn-ai</guid>
      <pubDate>Fri, 14 Apr 2023 05:36:09 +0000</pubDate>
    </item>
    <item>
      <title>Why you don&#39;t need to worry about AI as a programmer</title>
      <link>https://blog.senko.net/why-you-dont-need-to-worry-about-ai-as-a-programmer?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[ChatGPT and other AIs are all the rage and I see many (mostly junior) programmers worrying if the market for devs is going to dry up. &#xA;&#xA;You don&#39;t need to worry, and here&#39;s why.&#xA;&#xA;ChatGPT is scarily good. It really is. No, it&#39;s not better at web search than Google (yet), and it&#39;s nowhere near being sentient. But for the tasks where it makes sense, it&#39;s very good. So I&#39;m not going to tell you that you don&#39;t need to worry because it&#39;s a useless tool.&#xA;&#xA;ChatGPT (and other AIs) is a tool, in the same way a calculator is a tool or a compiler is a tool. The word &#34;calculator&#34; used to refer to people doing number crunching. My first calculator was a handheld device. Now it&#39;s just an app. Yes, calculators, the people, lost their &#34;job&#34; doing mind-numbing number crunching, but they were able to work on more interesting problems in math, physics, or what have you.&#xA;&#xA;Same with compilers. When compilers were invented, people were furious that someone thought a machine could do a better job than an expert programmer at crafting assembly code. Today almost nobody writes directly in assembly, except in rare cases where that makes sense. But &#34;assembler programmers&#34; didn&#39;t lose their jobs, they just became &#34;C programmers&#34; or &#34;Lisp programmers&#34;.&#xA;&#xA;It is the same with the new AI models. They are very effective on a whole other level than just number crunching or compiling software, but at the end of the day they&#39;re just tools, like programming languages, libraries or APIs.&#xA;&#xA;If you&#39;ve been a programmer for more than a few years, you know you always need to learn new stuff and stay on top. You&#39;ll need to invest some time to learn what ChatGPT and others can do, or can&#39;t. What you don&#39;t want to do is completely ignore the trend, or (equally bad) think it&#39;ll solve all your problems (or put you out of business). 
And as John Carmack said in a recent tweet, keep your eyes on the delivered value and don&#39;t over focus on the specifics of the tools.&#xA;&#xA;I&#39;ve been a programmer for some 30 years (20 or so professionally) and my usual reaction to new stuff is &#34;oh, so they&#39;re reinventing that particular wheel again&#34;. Yet the current crop of AIs gets me really excited, like I was a kid again uncovering the vast potential of what you can do with a machine that you can order around! I&#39;ve been playing with ChatGPT and it makes me faster and it makes programming (more) fun!&#xA;&#xA;Not by writing my code - I don&#39;t use it like that because it does produce bugs and it can hallucinate stuff. I use it to explore (how I might go about doing X), and to quickly recall something I forgot (how a particular API or library function is used, for example). As Simon Willison (of Django and Datasette fame) puts it, AI-enhanced development makes me more ambitious with my projects.&#xA;&#xA;This is just the beginning, and we&#39;re just seeing a boom in integrating these AIs into everything else. Copilot and Bing search are the big names, but people are experimenting with integrations with anything under the sun (I made a service that creates API backends based on your project description, for example). Time will tell which of these will be truly useful, but I have no doubt there will be a lot of them.&#xA;&#xA;AI is a tool, with limitations, but with a lot of potential. It would be a shame not to use it effectively. It won&#39;t put you out of a job, it will make your job better.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>ChatGPT and other AIs are all the rage and I see many (mostly junior) programmers worrying if the market for devs is going to dry up.</p>

<p>You don&#39;t need to worry, and here&#39;s why.</p>

<p>ChatGPT is scarily good. It really is. No it&#39;s not better at web search than Google (yet), and it&#39;s nowhere near being sentient. But for the tasks where it makes sense, it&#39;s <em>very</em> good. So I&#39;m not going to tell you that you don&#39;t need to worry because it&#39;s a useless tool.</p>

<p>ChatGPT (and other AIs) is a <em>tool</em>, in the same way a calculator is a tool or a compiler is a tool. The word “calculator” used to refer to people doing number crunching. My first calculator was a handheld device. Now it&#39;s just an app. Yes, calculators, the people, lost their “job” doing mind-numbing number crunching, but they were able to work on more interesting problems in math, physics, or what have you.</p>

<p>Same with compilers. When compilers were invented, people were furious that someone thought a machine could do a better job than an expert programmer at crafting assembly code. Today almost nobody writes directly in assembly, except in rare cases where that makes sense. But “assembler programmers” didn&#39;t lose their jobs, they just became “C programmers” or “Lisp programmers”.</p>

<p>It is the same with the new AI models. They are very effective on a whole other level than just number crunching or compiling software, but at the end of the day they&#39;re just tools, like programming languages, libraries or APIs.</p>

<p>If you&#39;ve been a programmer for more than a few years, you know you always need to learn new stuff and stay on top. You&#39;ll need to invest some time to learn what ChatGPT and others can do, or can&#39;t. What you don&#39;t want to do is completely ignore the trend, or (equally bad) think it&#39;ll solve all your problems (or put you out of business). And as John Carmack said in a recent tweet <a href="https://twitter.com/ID_AA_Carmack/status/1637087219591659520" rel="nofollow">keep your eyes on the delivered value and don&#39;t over focus on the specifics of the tools</a>.</p>

<p>I&#39;ve been a programmer for some 30 years (20 or so professionally) and my usual reaction to new stuff is “oh, so they&#39;re reinventing <em>that</em> particular wheel again”. Yet the current crop of AIs gets me really excited, like I was a kid again uncovering the vast potential of what you can do with a machine that you can order around! I&#39;ve been playing with ChatGPT and it makes me faster and it makes programming (more) fun!</p>

<p>Not by writing my code – I don&#39;t use it like that because it does produce bugs and it can hallucinate stuff. I use it to explore (how I might go about doing X), and to quickly recall something I forgot (how a particular API or library function is used, for example). As Simon Willison (of Django and Datasette fame) puts it <a href="https://simonwillison.net/2023/Mar/27/ai-enhanced-development/" rel="nofollow">AI-enhanced development makes me more ambitious with my projects</a>.</p>

<p>This is just the beginning, and we&#39;re just seeing a boom in integrating these AIs into everything else. Copilot and Bing search are the big names, but people are experimenting with integrations with anything under the sun (I made a service that <a href="https://apibakery.com/demo/ai/" rel="nofollow">creates API backends based on your project description</a>, for example). Time will tell which of these will be truly useful, but I have no doubt there will be a lot of them.</p>

<p>AI is a tool, with limitations, but with a lot of potential. It would be a shame not to use it effectively. It won&#39;t put you out of a job, it will make your job better.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/why-you-dont-need-to-worry-about-ai-as-a-programmer</guid>
      <pubDate>Tue, 28 Mar 2023 10:41:23 +0000</pubDate>
    </item>
    <item>
      <title>Relative popularity of programming languages on Hacker News</title>
      <link>https://blog.senko.net/relative-popularity-of-programming-languages-on-hacker-news?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[As a long-time Hacker News reader, I&#39;ve seen trends come and go. One of those trends is popularity of programming languages, manifested through the number of stories referencing a language.&#xA;&#xA;Examples include &#34;Building a Cache in Elixir&#34;, &#34;Helix: Neovim inspired editor, written in Rust&#34; or &#34;Text formatting in C++ using Libc++&#34;.&#xA;&#xA;Since these &#34;in $LANGUAGE&#34; articles come up fairly often, I was curious to see what we can glimpse about programming language popularity just by counting them. This is easy to do using HN Search API provided by Algolia.&#xA;&#xA;Photo by Thomas Tastet @ Unsplash&#xA;&#xA;Methodology&#xA;&#xA;I used it to search for stories in the past year, where the phrase &#34;in $LANGUAGE&#34; appears in the story title. The search included C, C++, C#, Clojure, Dart, Elixir, Erlang, F#, Go, Haskell, Java, JavaScript, Kotlin, Lisp, Lua, PHP, Python, Ruby, Rust, Scala, Scheme, Swift, TypeScript, and Zig languages.&#xA;&#xA;I also included Racket Scheme and added its numbers to Scheme. I&#39;d have included Common Lisp as well, but there were no stories mentioning it explicitly in the title in the past year. Also, since C would match both C++ and C# results in this simple string search, I deduplicated the results to get the correct count.&#xA;&#xA;I ranked the languages in three ways: by the number of stories, number of comments, and sum of points. The results were fairly consistent across all three metrics.&#xA;&#xA;If you&#39;re interested in running this yourself, this is the script I used (you&#39;ll need the requests package to run it). It downloads and caches search results as JSON in /tmp/hn, sorts the languages, and prints the results.&#xA;&#xA;I ran the script on October 12th, 2022.&#xA;&#xA;And the winner is ...&#xA;&#xA;... Rust, by a wide margin. 
Here are the full results:&#xA;&#xA;By the number of stories:&#xA;&#xA;Rust (573 stories)&#xA;Python (375 stories)&#xA;Go (332 stories)&#xA;JavaScript (200 stories)&#xA;C (143 stories)&#xA;C++ (136 stories)&#xA;TypeScript (79 stories)&#xA;Java (76 stories)&#xA;Ruby (57 stories)&#xA;10. Elixir (49 stories)&#xA;11. Haskell (48 stories)&#xA;12. Swift (41 stories)&#xA;13. C# (32 stories)&#xA;14. PHP (31 stories)&#xA;15. Zig (27 stories)&#xA;16. Clojure (23 stories)&#xA;17. Scheme (13 stories)&#xA;18. F# (12 stories)&#xA;19. Kotlin (9 stories)&#xA;20. Erlang (8 stories)&#xA;21. Lisp (8 stories)&#xA;22. Lua (8 stories)&#xA;23. Dart (4 stories)&#xA;24. Scala (4 stories)&#xA;&#xA;By the number of comments:&#xA;&#xA;Rust (6384 comments)&#xA;Go (3394 comments)&#xA;C (3027 comments)&#xA;Python (2783 comments)&#xA;JavaScript (1583 comments)&#xA;C++ (771 comments)&#xA;Zig (735 comments)&#xA;Java (615 comments)&#xA;Ruby (396 comments)&#xA;10. TypeScript (341 comments)&#xA;11. Haskell (301 comments)&#xA;12. PHP (223 comments)&#xA;13. Scheme (177 comments)&#xA;14. Swift (165 comments)&#xA;15. C# (151 comments)&#xA;16. Elixir (133 comments)&#xA;17. Clojure (82 comments)&#xA;18. Lua (75 comments)&#xA;19. Lisp (53 comments)&#xA;20. F# (45 comments)&#xA;21. Erlang (39 comments)&#xA;22. Kotlin (7 comments)&#xA;23. Dart (1 comments)&#xA;24. Scala (0 comments)&#xA;&#xA;By points:&#xA;&#xA;Rust (16887 points)&#xA;Go (7144 points)&#xA;Python (7048 points)&#xA;C (5422 points)&#xA;JavaScript (3644 points)&#xA;Zig (2427 points)&#xA;C++ (1846 points)&#xA;Java (1595 points)&#xA;Ruby (1044 points)&#xA;10. TypeScript (998 points)&#xA;11. Haskell (846 points)&#xA;12. Elixir (751 points)&#xA;13. Scheme (549 points)&#xA;14. Swift (447 points)&#xA;15. PHP (402 points)&#xA;16. Clojure (365 points)&#xA;17. C# (334 points)&#xA;18. F# (331 points)&#xA;19. Erlang (322 points)&#xA;20. Lua (238 points)&#xA;21. Lisp (154 points)&#xA;22. Kotlin (34 points)&#xA;23. Dart (14 points)&#xA;24. 
Scala (7 points)&#xA;&#xA;Hacker News loves Rust&#xA;&#xA;Go, Python and JavaScript are quite popular, but Rust absolutely blows the competition out of the water. It had almost as many stories and comments as the languages in second and third place combined, and was awarded even more points.&#xA;&#xA;If you&#39;re a regular HN reader, this isn&#39;t surprising. As a modern language with a passionate community, Rust is invariably mentioned in most conversations about languages in general, and stories about using Rust for a new project or in a new area are always exciting.&#xA;&#xA;And I&#39;m not even counting the &#34;Rust in the Linux kernel&#34; theme, which alone got around 600 points across various stories in the past year.&#xA;&#xA;The new kid on the block is ... Zig&#xA;&#xA;What did surprise me is the relative popularity of Zig, a fairly young language that still has a relatively small community - I&#39;ll be sure to keep an eye on it.&#xA;&#xA;--&#xA;&#xA;(comment on Hacker News) | (follow me on Twitter)&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>As a long-time <a href="https://news.ycombinator.com" rel="nofollow">Hacker News</a> reader, I&#39;ve seen trends come and go. One of those trends is popularity of programming languages, manifested through the number of stories referencing a language.</p>

<p>Examples include “<a href="https://news.ycombinator.com/item?id=32054532" rel="nofollow">Building a Cache in Elixir</a>”, “<a href="https://news.ycombinator.com/item?id=33147270" rel="nofollow">Helix: Neovim inspired editor, written in Rust</a>” or “<a href="https://news.ycombinator.com/item?id=33004803" rel="nofollow">Text formatting in C++ using Libc++</a>”.</p>

<p>Since these “<em>in $LANGUAGE</em>” articles come up fairly often, I was curious to see what we can glimpse about programming language popularity just by counting them. This is easy to do using <a href="https://hn.algolia.com/api" rel="nofollow">HN Search API</a> provided by Algolia.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/blog/thomas-tastet-0eqgB57xMeA-unsplash.jpg" alt="Photo by Thomas Tastet @ Unsplash"/></p>

<h2 id="methodology">Methodology</h2>

<p>I used it to search for stories in the past year, where the phrase “<em>in $LANGUAGE</em>” appears in the story title. The search included C, C++, C#, Clojure, Dart, Elixir, Erlang, F#, Go, Haskell, Java, JavaScript, Kotlin, Lisp, Lua, PHP, Python, Ruby, Rust, Scala, Scheme, Swift, TypeScript, and Zig languages.</p>

<p>I also included Racket Scheme and added its numbers to Scheme. I&#39;d have included Common Lisp as well, but there were no stories mentioning it explicitly in the title in the past year. Also, since C would match both C++ and C# results in this simple string search, I deduplicated the results to get the correct count.</p>

<p>I ranked the languages in three ways: by the number of stories, number of comments, and sum of points. The results were fairly consistent across all three metrics.</p>

<p>If you&#39;re interested in running this yourself, <a href="https://gist.github.com/senko/b031f61f61d89f96e165659d3f022784" rel="nofollow">this is the script I used</a> (you&#39;ll need the <code>requests</code> package to run it). It downloads and caches search results as JSON in <code>/tmp/hn</code>, sorts the languages, and prints the results.</p>

<p>I ran the script on October 12th, 2022.</p>
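<p>The script linked above is the canonical version; as a rough illustration of the approach, here is a minimal Python sketch against Algolia&#39;s public HN Search API. The endpoint and its <code>query</code>, <code>tags</code>, and <code>numericFilters</code> parameters are the real API; the title phrase-matching and the C vs C++/C# deduplication are my own reconstruction of the steps described above, not the exact logic in the gist.</p>

```python
import re
import time

import requests  # third-party; pip install requests

ALGOLIA_URL = "https://hn.algolia.com/api/v1/search_by_date"


def title_mentions(title: str, lang: str) -> bool:
    """True if the title contains the phrase "in {lang}".

    Bare "C" must not also swallow "in C++" or "in C#" hits
    (the deduplication step described above), and "in Go" must
    not match "in Google", so the language name has to end at
    a word boundary rather than be a plain substring.
    """
    pattern = r"\bin\s+" + re.escape(lang) + r"(?![\w+#])"
    return re.search(pattern, title, re.IGNORECASE) is not None


def count_language(lang: str, since: int) -> dict:
    """Tally stories, comments, and points for one language.

    Fetches up to 1000 stories whose text matches the quoted
    phrase, created after the `since` Unix timestamp, then
    keeps only those whose *title* actually contains the phrase.
    """
    resp = requests.get(
        ALGOLIA_URL,
        params={
            "query": f'"in {lang}"',
            "tags": "story",
            "numericFilters": f"created_at_i>{since}",
            "hitsPerPage": 1000,
        },
        timeout=30,
    )
    resp.raise_for_status()
    hits = [h for h in resp.json()["hits"]
            if title_mentions(h.get("title") or "", lang)]
    return {
        "stories": len(hits),
        "comments": sum(h.get("num_comments") or 0 for h in hits),
        "points": sum(h.get("points") or 0 for h in hits),
    }


# Example usage (makes live network requests):
#   since = int(time.time()) - 365 * 24 * 3600
#   for lang in ("Rust", "Go", "Python", "C", "C++", "Zig"):
#       print(lang, count_language(lang, since))
```

<p>Ranking by any of the three metrics is then just a sort over the returned dictionaries.</p>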

<h2 id="and-the-winner-is">And the winner is ...</h2>

<p>... <a href="https://www.rust-lang.org/" rel="nofollow">Rust</a>, by a wide margin. Here are the full results:</p>

<p>By the number of stories:</p>
<ol><li>Rust (573 stories)</li>
<li>Python (375 stories)</li>
<li>Go (332 stories)</li>
<li>JavaScript (200 stories)</li>
<li>C (143 stories)</li>
<li>C++ (136 stories)</li>
<li>TypeScript (79 stories)</li>
<li>Java (76 stories)</li>
<li>Ruby (57 stories)</li>
<li>Elixir (49 stories)</li>
<li>Haskell (48 stories)</li>
<li>Swift (41 stories)</li>
<li>C# (32 stories)</li>
<li>PHP (31 stories)</li>
<li>Zig (27 stories)</li>
<li>Clojure (23 stories)</li>
<li>Scheme (13 stories)</li>
<li>F# (12 stories)</li>
<li>Kotlin (9 stories)</li>
<li>Erlang (8 stories)</li>
<li>Lisp (8 stories)</li>
<li>Lua (8 stories)</li>
<li>Dart (4 stories)</li>
<li>Scala (4 stories)</li></ol>

<p>By the number of comments:</p>
<ol><li>Rust (6384 comments)</li>
<li>Go (3394 comments)</li>
<li>C (3027 comments)</li>
<li>Python (2783 comments)</li>
<li>JavaScript (1583 comments)</li>
<li>C++ (771 comments)</li>
<li>Zig (735 comments)</li>
<li>Java (615 comments)</li>
<li>Ruby (396 comments)</li>
<li>TypeScript (341 comments)</li>
<li>Haskell (301 comments)</li>
<li>PHP (223 comments)</li>
<li>Scheme (177 comments)</li>
<li>Swift (165 comments)</li>
<li>C# (151 comments)</li>
<li>Elixir (133 comments)</li>
<li>Clojure (82 comments)</li>
<li>Lua (75 comments)</li>
<li>Lisp (53 comments)</li>
<li>F# (45 comments)</li>
<li>Erlang (39 comments)</li>
<li>Kotlin (7 comments)</li>
<li>Dart (1 comment)</li>
<li>Scala (0 comments)</li></ol>

<p>By points:</p>
<ol><li>Rust (16887 points)</li>
<li>Go (7144 points)</li>
<li>Python (7048 points)</li>
<li>C (5422 points)</li>
<li>JavaScript (3644 points)</li>
<li>Zig (2427 points)</li>
<li>C++ (1846 points)</li>
<li>Java (1595 points)</li>
<li>Ruby (1044 points)</li>
<li>TypeScript (998 points)</li>
<li>Haskell (846 points)</li>
<li>Elixir (751 points)</li>
<li>Scheme (549 points)</li>
<li>Swift (447 points)</li>
<li>PHP (402 points)</li>
<li>Clojure (365 points)</li>
<li>C# (334 points)</li>
<li>F# (331 points)</li>
<li>Erlang (322 points)</li>
<li>Lua (238 points)</li>
<li>Lisp (154 points)</li>
<li>Kotlin (34 points)</li>
<li>Dart (14 points)</li>
<li>Scala (7 points)</li></ol>

<h2 id="hacker-news-loves-rust">Hacker News loves Rust</h2>

<p>Go, Python and JavaScript are quite popular, but Rust absolutely blows the competition out of the water. It had almost as many stories and comments as the languages in second and third place <em>combined</em>, and was awarded even more points.</p>

<p>If you&#39;re a regular HN reader, this isn&#39;t surprising. As a modern language with a passionate community, Rust is invariably mentioned in most conversations about languages in general, and stories about using Rust for a new project or in a new area are always exciting.</p>

<p>And I&#39;m not even counting the “<a href="https://news.ycombinator.com/item?id=29485465" rel="nofollow">Rust in the Linux kernel</a>” theme, which alone got around 600 points across various stories in the past year.</p>

<h2 id="the-new-kid-on-the-block-is-zig">The new kid on the block is ... Zig</h2>

<p>What did surprise me is the relative popularity of <a href="https://ziglang.org/" rel="nofollow">Zig</a>, a fairly young language that still has a relatively small community – I&#39;ll be sure to keep an eye on it.</p>

<p>—</p>

<p>(<a href="https://news.ycombinator.com/item?id=33190148" rel="nofollow">comment on Hacker News</a>) | (<a href="https://twitter.com/senkorasic" rel="nofollow">follow me on Twitter</a>)</p>
]]></content:encoded>
      <guid>https://blog.senko.net/relative-popularity-of-programming-languages-on-hacker-news</guid>
      <pubDate>Thu, 13 Oct 2022 12:22:59 +0000</pubDate>
    </item>
    <item>
      <title>The future of hybrid and remote work</title>
      <link>https://blog.senko.net/hybrid-and-remote-work?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[As a software developer with many years of remote and in-office experience, I am excited by the current shift that&#39;s making remote and hybrid work commonplace.&#xA;&#xA;The pandemic was a global tragedy and a big disruptor to every aspect of the lives of people everywhere. One of the few silver linings was that it forced a rethinking of the importance of location for the knowledge worker and what &#34;office&#34; actually means.&#xA;&#xA;In my two decades of professional experience, I spent about half of it working remotely. The other half we had a small office (10 to 15 people), but mostly worked with clients overseas. In my time as both an individual contributor, and manager of dev teams, I had to handle the challenges arising from, and enjoy the advantages of, remote work.&#xA;&#xA;a digital nomad in its natural habitat&#xA;&#xA;My view is naturally colored by my experience. I am a software engineer and have worked in IT my entire career. While I believe most of what I say here can be generalized, there are nuances for every profession.&#xA;&#xA;Location, location, location&#xA;&#xA;Let&#39;s get obvious out of the way. Not all work is - or will be - possible to do remotely.&#xA;&#xA;Although technology will continue to push the limits (remote truck driving, anyone?), many jobs, from surgeons to shopkeepers to street sweepers, still require people to be in a certain location to do their job. This does not mean those jobs are lesser, or that they will lose out in the &#34;remote work revolution&#34; - quite the opposite. Fewer people commuting to work and crowding in hot locations means less traffic, less stress and more room for those that choose to - or have to - be there.&#xA;&#xA;Bare necessities&#xA;&#xA;My job as a software developer consists mostly of reading from a computer screen and typing on a keyboard, with the occasional use of a mouse. 
I can do that anywhere on the planet, provided there is electricity and an adequate internet connection. In reality, what I consume (as a work input) and produce (work output) is information, and that&#39;s easily sent across the world.&#xA;&#xA;The same holds mostly true for most knowledge workers. When push came to shove (the pandemic lockdowns), a lot of people realized the work they do can be performed remotely.&#xA;&#xA;Hierarchy of needs&#xA;&#xA;Of course, that&#39;s not all there is. We still need information, often implicit, from our coworkers, about the problem we&#39;re solving, and the wider context our company, customers, and clients are operating in. Much of this is recorded poorly, or not at all. Remote interactions are harder than face-to-face and Zoom fatigue is real. And sometimes we need specialized office or lab equipment.&#xA;&#xA;And as much as we love working in our pajamas, we might miss office chit-chat or getting that energized feeling from being a member of a well-oiled team of awesome people doing great work together.&#xA;&#xA;Back to life, back to reality, back to the here and now&#xA;&#xA;Now that things are normalizing a bit (I know it&#39;s a stretch to use that word in the middle of 2022), companies are starting to mandate back-to-office or hybrid work. Conspiracy theories about reasons are many, from wanting to micromanage workers better, to making sure expensive office leases are utilized, to being a sneaky way to lay people off without actually giving them the pink slip.&#xA;&#xA;However this is resolved, the genie is out of the bottle. Remote work has previously been regarded as a unique perk or very specific work arrangement. Now, whether it&#39;s allowed or not in a particular company, it&#39;s normal. I think that&#39;s a good thing.&#xA;&#xA;Checking our privilege&#xA;&#xA;Us IT folks have it easy. The job market is hot and there are plenty of opportunities for good engineers. 
Yes, there are hiring freezes and some companies are laying off people. In most of the cases I heard of, people were quickly snatched by the competitors.&#xA;&#xA;This puts us in a position to demand the ability to work remotely, and if needed, to quit and join a company that will allow that. Not everyone is so lucky and there are legions of knowledge workers who have returned (or will have to return) to the office just because the bosses decreed it.&#xA;&#xA;In IT, and especially in software development, the shift is real, and companies need to adapt.&#xA;&#xA;In-office vs remote work&#xA;&#xA;There is a fundamental divide between in-office and remote work, and that is how the information within the company flows. Companies where in-office work is the norm can get away with much larger implicit context and rely on employees communicating directly ad-hoc, as needed.&#xA;&#xA;To take full advantage, such companies maximize the overlap between the employees&#39; working hours (9 to 5 everyone!), and try to cluster employees together (if not all the company can be in the same location, at least the teams working together) and design office space to optimize for interaction (leading to unfortunate things like open offices).&#xA;&#xA;Remote-first companies focus on async work (work on your own time, communicate asynchronously with coworkers) making it possible to hire globally. Instead of spending on office space, they give workers budgets to improve their workspace or equipment. But most of all, remote-first companies take communication at all levels seriously. 
From office banter to all-hands meetings, stuff is happening online and is often saved or recorded for the benefit of people who are not online at that very moment.&#xA;&#xA;It&#39;s easy to contrast these two extremes, but many companies will probably lie somewhere in between.&#xA;&#xA;Old-style hybrid work&#xA;&#xA;&#34;Hybrid&#34; is a word that describes two different work arrangements, depending on where the company started from.&#xA;&#xA;For traditional office-based companies that are introducing hybrid work, this usually amounts to a perk allowing employees to work from home a few days per week. Often, at least two days are fixed for meetings and any other work that must be performed on-site. This type of hybrid office can easily slip back into the &#34;office is king&#34; mentality, where the important things still happen in the office, not everything is shared with remote workers (not necessarily due to any malicious intent), and the more you&#39;re present, the faster your career trajectory will be.&#xA;&#xA;This was mostly the status quo for remote workers before the pandemic. Except in rare enlightened companies, remote workers were de facto second-class citizens. The more a company now stresses that being in-office is important, the more likely it&#39;ll fall back on this pre-pandemic default.&#xA;&#xA;I believe that companies that are now requiring employees to be present at least three days a week will most likely fall back to this mentality.&#xA;&#xA;If they combine this with hot-desking, where employees must find themselves a (possibly different) desk each day, it will only make things worse. I fear this may be a real possibility for many, to try to have the workers back but still save on office space.&#xA;&#xA;Remote-first hybrid work&#xA;&#xA;Another type of hybrid-location company is remote-first, which recognizes there is benefit in getting people together or making it easy for people who want to work from the office to do so. 
These companies understand office is just another location. Hopefully, the environment is optimized for work (quiet environment, good equipment, etc) but in terms of work communication, it holds no special place.&#xA;&#xA;A company I currently work with has such an approach. They&#39;re NYC-based, but have employees throughout the US, as well as people in Latin America and Europe. The main communication channels are all digital - Slack, Zoom, Notion, and email. Scheduling Zoom meetings across time zones is a bit challenging, but they manage and are looking to improve as they grow.&#xA;&#xA;From remote-as-perk to remote-first hybrid&#xA;&#xA;There&#39;s a tremendous opportunity for companies that are willing to switch from the office-based (or remote-as-perk a few days per week) model to the remote-first hybrid model. They can still make use of their existing office space (possibly downsized to reap financial benefits), reconfiguring it so it supports people who want to work from the office, or who need to come to the office. &#xA;&#xA;Essentially, they can turn their offices into a sort of internal coworking space. Employees that don&#39;t want or can&#39;t work from home can still come to the office. Coworkers or entire teams can independently or ad-hoc agree to come to the office for a day or a week for brainstorming sessions, project kick-offs, or just to socialize from time to time.&#xA;&#xA;Remote-first, but not remote-always&#xA;&#xA;In a few places I worked as a remote developer, companies had policies to bring everyone on the team together for a few days. This was sometimes combined with everyone going to a conference but hanging out in the off-times, or for everyone to be in the same place to start a new initiative. 
This was always a fun experience (even when paired with hard work) and I felt energized and ready to take on new challenges.&#xA;&#xA;I believe fully remote companies should try to bring their people together at least once or twice per year. This can be company-wide (for smaller organizations) or per-team (easier to do with larger companies). If the company is fully remote and doesn&#39;t have an HQ, it doesn&#39;t matter: pick a nice location and if there&#39;s a conference or another interesting event, that&#39;s even better. &#xA;&#xA;Remote vs overseas&#xA;&#xA;A lot of companies these days say they hire remotely but within a country. For example, a job listing may say &#34;remote, US-only&#34;.&#xA;&#xA;This is understandable: it&#39;s legally easier to hire within the country you&#39;re incorporated in, the cultural differences are smaller, and time zone difference is more manageable. It can also be harder to find remote talent since you don&#39;t know where or how to look. The stigma of hiring overseas developers as &#34;outsourcing to cheap labor&#34; doesn&#39;t help.&#xA;&#xA;Cultural differences are real and should not be ignored. Being cognizant of the different ways people communicate (or avoid difficult subjects) is important. I&#39;ve found that (respectfully) overcommunicating helps, as it reduces the potential for confusion. Time zone differences can have a larger or smaller impact, depending on whether your team&#39;s communication is more synchronous or async. If you have daily Zoom meetings and your team is spread over Australia, Europe, and the US, you&#39;ll have a lot of unhappy people. Somewhat localizing the teams, and making most communication asynchronous help.&#xA;&#xA;Remote doesn&#39;t mean cheap labor&#xA;&#xA;If you pay peanuts, you&#39;ll get (code) monkeys. Good developers anywhere know what they&#39;re worth. 
That said, the differences in cost of living are huge, and people do take into account the (in)convenience of living anywhere. $100K means a very different lifestyle to someone in San Francisco than to someone in Vienna or Manila.&#xA;&#xA;Developers are ambivalent toward location-based salaries - on the one hand, we&#39;d prefer to be paid what we&#39;re worth (based on the value we provide), not based on where we live. On the other hand, we don&#39;t accept that there are talented and skilled people that can provide the same value for a slightly (or more than slightly) lower price, in lower-income countries.&#xA;&#xA;If companies were completely oblivious to someone&#39;s location, it would be natural for them to want to pay a lower price for the same value, so they&#39;d hire in lower-income locations (within a country or overseas). But they&#39;re not. They do prefer local (or nearby) employees, and they are willing to pay a higher price for that. A direct consequence of this is that at least some portion of the salary is going to be location-based.&#xA;&#xA;As knowledge workers, we should come to terms with that fact. We are paid based on the value we provide, but not in isolation, and the job market will always have an impact.&#xA;&#xA;The future is already here - it&#39;s just not evenly distributed&#xA;&#xA;The pandemic turned remote work from a fringe benefit to a normal, accepted way to do your job, in cases, companies and jobs where that makes sense. The technology will keep pushing the boundaries where that&#39;s possible, and the knowledge workers will keep pushing to have that option.&#xA;&#xA;Companies will have to adapt: by doubling down on office (and paying extra for it), embracing the new opportunity to broaden the talent pool, or finding a hybrid sweet spot that makes sense for both them and their employees.&#xA;&#xA;There&#39;s no going back.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>As a software developer with many years of remote and in-office experience, I am excited by the current shift that&#39;s making remote and hybrid work commonplace.</p>

<p>The pandemic was a global tragedy and a big disruptor to every aspect of the lives of people everywhere. One of the few silver linings was that it forced a rethinking of the importance of location for the knowledge worker and what “office” actually means.</p>

<p>In my two decades of professional experience, I spent about half of it working remotely. The other half we had a small office (10 to 15 people), but mostly worked with clients overseas. In my time as both an individual contributor, and manager of dev teams, I had to handle the challenges arising from, and enjoy the advantages of, remote work.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/blog/jefferson-santos-yeSFK3x3p7M-unsplash.jpg" alt="a digital nomad in its natural habitat"/></p>

<p>My view is naturally colored by my experience. I am a software engineer and have worked in IT my entire career. While I believe most of what I say here can be generalized, there are nuances for every profession.</p>

<h2 id="location-location-location">Location, location, location</h2>

<p>Let&#39;s get the obvious out of the way. Not all work is – or will be – possible to do remotely.</p>

<p>Although technology will continue to push the limits (remote truck driving, anyone?), many jobs, from surgeons to shopkeepers to street sweepers, still require people to be in a certain location to do their job. This does not mean those jobs are lesser, or that they will lose out in the “remote work revolution” – quite the opposite. Fewer people commuting to work and crowding in hot locations means less traffic, less stress and more room for those that choose to – or have to – be there.</p>

<h2 id="bare-necessities">Bare necessities</h2>

<p>My job as a software developer consists mostly of reading from a computer screen and typing on a keyboard, with the occasional use of a mouse. I can do that anywhere on the planet, provided there is electricity and an adequate internet connection. In reality, what I consume (as a work input) and produce (work output) is information, and that&#39;s easily sent across the world.</p>

<p>The same holds true for most knowledge workers. When push came to shove (the pandemic lockdowns), a lot of people realized the work they do can be performed remotely.</p>

<h2 id="hierarchy-of-needs">Hierarchy of needs</h2>

<p>Of course, that&#39;s not all there is. We still need information, often implicit, from our coworkers, about the problem we&#39;re solving and the wider context our company, customers, and clients are operating in. Much of this is recorded poorly, or not at all. Remote interactions are harder than face-to-face ones, and Zoom fatigue is real. And sometimes we need specialized office or lab equipment.</p>

<p>And as much as we love working in our pajamas, we might miss office chit-chat or getting that energized feeling from being a member of a well-oiled team of awesome people doing great work together.</p>

<h2 id="back-to-life-back-to-reality-back-to-the-here-and-now">Back to life, back to reality, back to the here and now</h2>

<p>Now that things are normalizing a bit (I know it&#39;s a stretch to use that word in the middle of 2022), companies are starting to mandate back-to-office or hybrid work. Conspiracy theories about the reasons are many: wanting to micromanage workers better, making sure expensive office leases are utilized, or using it as a sneaky way to lay people off without actually handing out pink slips.</p>

<p>However this is resolved, the genie is out of the bottle. Remote work was previously regarded as a unique perk or a very specific work arrangement. Now, whether or not it&#39;s allowed in a particular company, it&#39;s normal. I think that&#39;s a good thing.</p>

<h2 id="checking-our-privilege">Checking our privilege</h2>

<p>We IT folks have it easy. The job market is hot and there are plenty of opportunities for good engineers. Yes, there are hiring freezes and some companies are laying people off, but in most of the cases I&#39;ve heard of, people were quickly snatched up by competitors.</p>

<p>This puts us in a position to demand the ability to work remotely, and if needed, to quit and join a company that will allow that. Not everyone is so lucky and there are legions of knowledge workers who have returned (or will have to return) to the office just because the bosses decreed it.</p>

<p>In IT, and especially in software development, the shift is real, and companies need to adapt.</p>

<h2 id="in-office-vs-remote-work">In-office vs remote work</h2>

<p>There is a fundamental divide between in-office and remote work, and that is how information flows within the company. Companies where in-office work is the norm can get away with a much larger implicit context and rely on employees communicating directly, ad-hoc, as needed.</p>

<p>To take full advantage of this, such companies maximize the overlap between employees&#39; working hours (9 to 5, everyone!), try to cluster employees together (if the whole company can&#39;t be in the same location, then at least the teams working together), and design office space to optimize for interaction (leading to unfortunate things like open offices).</p>

<p>Remote-first companies focus on async work (work on your own time, communicate asynchronously with coworkers), making it possible to hire globally. Instead of spending on office space, they give workers budgets to improve their workspace or equipment. But most of all, remote-first companies take communication at all levels seriously. From office banter to all-hands meetings, stuff happens online and is often saved or recorded for the benefit of people who are not online at that very moment.</p>

<p>It&#39;s easy to contrast these two extremes, but many companies will probably lie somewhere in between.</p>

<h2 id="old-style-hybrid-work">Old-style hybrid work</h2>

<p>“Hybrid” is a word that describes two different work arrangements, depending on where the company started from.</p>

<p>For traditional office-based companies that are introducing hybrid work, this usually amounts to a perk allowing employees to work from home a few days per week. Often, at least two days are fixed for meetings and any other work that must be performed on-site. This type of hybrid office can easily slip back into the “office is king” mentality, where the important things still happen in the office, not everything is shared with remote workers (not necessarily due to any malicious intent), and the more you&#39;re present, the faster your career trajectory will be.</p>

<p>This was mostly the status quo for remote workers before the pandemic. Except in rare enlightened companies, remote workers were de facto second-class citizens. The more a company now stresses that being in-office is important, the more likely it&#39;ll fall back on this pre-pandemic default.</p>

<p>I believe that companies that are now requiring employees to be present at least three days a week will most likely fall back to this mentality.</p>

<p>If they combine this with hot-desking, where employees must find themselves a (possibly different) desk each day, it will only make things worse. I fear this may be a real possibility for many companies trying to have the workers back while still saving on office space.</p>

<h2 id="remote-first-hybrid-work">Remote-first hybrid work</h2>

<p>Another type of hybrid-location company is remote-first, which recognizes there is benefit in getting people together, and makes it easy for people who want to work from the office to do so. These companies understand the office is just another location. Hopefully the environment is optimized for work (quiet, good equipment, etc.), but in terms of work communication it holds no special place.</p>

<p>A company I currently work with has such an approach. They&#39;re NYC-based, but have employees throughout the US, as well as people in Latin America and Europe. The main communication channels are all digital – Slack, Zoom, Notion, and email. Scheduling Zoom meetings across time zones is a bit challenging, but they manage and are looking to improve as they grow.</p>

<h2 id="from-remote-as-perk-to-remote-first-hybrid">From remote-as-perk to remote-first hybrid</h2>

<p>There&#39;s a tremendous opportunity for companies that are willing to switch from the office-based (or remote-as-perk a few days per week) model to the remote-first hybrid model. They can still make use of their existing office space (possibly downsized to reap financial benefits), reconfiguring it to support people who want to work from the office, or who need to come in.</p>

<p>Essentially, they can turn their offices into a sort of internal coworking space. Employees who don&#39;t want to or can&#39;t work from home can still come to the office. Coworkers or entire teams can independently agree, ad-hoc, to come to the office for a day or a week for brainstorming sessions, project kick-offs, or just to socialize from time to time.</p>

<h2 id="remote-first-but-not-remote-always">Remote-first, but not remote-always</h2>

<p>In a few places where I worked as a remote developer, companies had policies to bring the whole team together for a few days. This was sometimes combined with everyone going to a conference but hanging out in the off-times, or with everyone gathering in the same place to start a new initiative. This was always a fun experience (even when paired with hard work), and I felt energized and ready to take on new challenges.</p>

<p>I believe fully remote companies should try to bring their people together at least once or twice per year. This can be company-wide (for smaller organizations) or per-team (easier to do in larger companies). If the company is fully remote and doesn&#39;t have an HQ, it doesn&#39;t matter: pick a nice location, and if there&#39;s a conference or another interesting event nearby, even better.</p>

<h2 id="remote-vs-overseas">Remote vs overseas</h2>

<p>A lot of companies these days say they hire remotely but within a country. For example, a job listing may say “remote, US-only”.</p>

<p>This is understandable: it&#39;s legally easier to hire within the country you&#39;re incorporated in, the cultural differences are smaller, and the time zone differences are more manageable. It can also be harder to find remote talent if you don&#39;t know where or how to look. The stigma of hiring overseas developers as “outsourcing to cheap labor” doesn&#39;t help.</p>

<p>Cultural differences are real and should not be ignored. Being cognizant of the different ways people communicate (or avoid difficult subjects) is important. I&#39;ve found that (respectfully) overcommunicating helps, as it reduces the potential for confusion. Time zone differences can have a larger or smaller impact, depending on whether your team&#39;s communication is more synchronous or async. If you have daily Zoom meetings and your team is spread over Australia, Europe, and the US, you&#39;ll have a lot of unhappy people. Localizing the teams somewhat, and making most communication asynchronous, both help.</p>

<h2 id="remote-doesn-t-mean-cheap-labor">Remote doesn&#39;t mean cheap labor</h2>

<p>If you pay peanuts, you&#39;ll get (code) monkeys. Good developers anywhere know what they&#39;re worth. That said, the differences in cost of living are huge, and people do take into account the (in)convenience of living anywhere. $100K means a very different lifestyle to someone in San Francisco than to someone in Vienna or Manila.</p>

<p>Developers are ambivalent toward location-based salaries – on the one hand, we&#39;d prefer to be paid what we&#39;re worth (based on the value we provide), not based on where we live. On the other hand, we don&#39;t like to accept that there are talented and skilled people in lower-income countries who can provide the same value for a slightly (or more than slightly) lower price.</p>

<p>If companies were completely oblivious to someone&#39;s location, it would be natural for them to want to pay a lower price for the same value, so they&#39;d hire in lower-income locations (within a country or overseas). But they&#39;re not. They do prefer local (or nearby) employees, and they are willing to pay a higher price for that. A direct consequence of this is that at least some portion of the salary is going to be location-based.</p>

<p>As knowledge workers, we should come to terms with that fact. We are paid based on the value we provide, but not in isolation, and the job market will always have an impact.</p>

<h2 id="the-future-is-already-here-it-s-just-not-evenly-distributed">The future is already here – it&#39;s just not evenly distributed</h2>

<p>The pandemic turned remote work from a fringe benefit into a normal, accepted way to do your job, in the cases, companies, and jobs where that makes sense. Technology will keep pushing the boundaries of where that&#39;s possible, and knowledge workers will keep pushing to have that option.</p>

<p>Companies will have to adapt: by doubling down on the office (and paying extra for it), embracing the new opportunity to broaden the talent pool, or finding a hybrid sweet spot that makes sense for both them and their employees.</p>

<p>There&#39;s no going back.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/hybrid-and-remote-work</guid>
      <pubDate>Tue, 23 Aug 2022 22:29:06 +0000</pubDate>
    </item>
    <item>
      <title>The story of A Web Whiteboard</title>
      <link>https://blog.senko.net/the-story-of-a-web-whiteboard?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Last month, after the successful transition of our users to Miro, I shut down the public servers of A Web Whiteboard (AWW App), marking the final chapter of a decade-long story.&#xA;&#xA;Throughout, millions of users have collectively spent millennia drawing, learning, collaborating, and just having fun with AWW. It&#39;s been used in schools, FAANGs and everywhere in between. Early in 2021 we joined forces with Miro, the world&#39;s leading online whiteboard platform.&#xA;&#xA;The start of the story is much humbler. I never intended or even imagined it would be the success it has been.&#xA;&#xA;Origins&#xA;&#xA;Ten years ago, back in 2011, netbooks were all the rage. What started some six years earlier with the EeePC turned into an entire product category with a Cambrian explosion of options. I got my hands on one with a curious feature - a touchscreen. At the time, system UIs weren&#39;t at all suited to clumsy fingers instead of a precision-controlled mouse pointer.&#xA;&#xA;So there I was, back in 2011, trying to find a simple, nice drawing program for Linux that would work well, UI-wise, with touchscreens. I didn&#39;t find any, so I decided to write my own. I took this as an opportunity to learn HTML canvas and write it as a web app.&#xA;&#xA;It was all pretty rudimentary. You had a pencil in a few colors, an eraser, and the ability to clear the entire canvas. This first version even lacked the ability to save the result - you could always screenshot the page if you wanted to keep it.&#xA;&#xA;This wasn&#39;t even a side project at the time - just a fun experiment. For kicks, I decided to put it up online and, after thinking for a whole two seconds, named it A Web Whiteboard.
Since aww.com was taken, I used awwapp.com for the domain (nameapp.com was a popular way of getting around the domain name squatters at the time).&#xA;&#xA;Sharing&#xA;&#xA;A few months in, someone mentioned it&#39;d be cool if you could actually have multiple people drawing at the same time. I took the suggestion as an opportunity to again dabble in new tech. At the time, websockets were just entering mainstream use and Node was starting its meteoric rise, so I selected it for the synchronization backend, along with a now all-but-forgotten RPC-over-sockets library called nowjs. Being primarily a Python developer, I kept at least something familiar, using Flask for saving the board contents.&#xA;&#xA;This is how it looked:&#xA;&#xA;First public version&#xA;&#xA;I spread the word around, got some mention in the press and received positive feedback, including on Hacker News and Reddit. There weren&#39;t many users back then (the whole thing comfortably fit in the RAM of a rather small VPS), but each time users gathered to draw together, or shared a saved drawing on social media, more people heard about AWW.&#xA;&#xA;After the launch, I maintained AWW but treated it as a side hobby, and for a couple of years most of my focus was on other things.&#xA;&#xA;Embedding&#xA;&#xA;One thing I did from the start was organize AWW as a whiteboard widget that could be embedded into other sites, with awwapp.com being just one of the sites that embedded it.&#xA;&#xA;When I started getting requests to let people add it to their sites, it was easy to do. I set up a PayPal subscribe button and wrote up a simple doc on how to embed the widget.
When someone subscribed, I&#39;d add them to the accounts &#34;database&#34; (actually a JSON file) and give them a unique API key to embed with.&#xA;&#xA;To add the whiteboard to their sites, my customers would include jQuery and add the following snippet:&#xA;&lt;div id=&#34;wrapper&#34; style=&#34;width: 800px; height: 600px;&#34;&gt;&lt;/div&gt;&#xA;&lt;script src=&#34;http://static.awwapp.com/plugin/1.0/aww.min.js&#34;&gt;&lt;/script&gt;&#xA;&lt;script&gt;&#xA;    $(&#39;#wrapper&#39;).awwCanvas({apiKey: &#39;YOURAPIKEY&#39;});&#xA;&lt;/script&gt;&#xA;&#xA;After a time, this netted me a few hundred bucks a month, which was nice but nothing I&#39;d base my business on - or so I thought.&#xA;&#xA;I also started getting more and more serious questions about using the whiteboard on awwapp.com for business, tutoring, etc., as well as feature requests. Most of my customers were tutors using it together with Skype to provide online lessons to their students. They often didn&#39;t have a site to embed the whiteboard on, were happy to use the awwapp.com version, and were willing to pay for it.&#xA;&#xA;The rewrite&#xA;&#xA;As I started treating AWW as a viable business in its own right, I started thinking about rewriting it.&#xA;&#xA;The old JSON file approach was getting unwieldy, as was the fact that all active boards were being kept in memory, limiting growth. Finally, the old nowjs framework was unmaintained, meaning we were stuck on an extremely old Node version (0.6 if I remember correctly).&#xA;&#xA;I decided on a full rewrite with a redesigned architecture, using modern (for the time!) JavaScript features - stuff like promises, the socket.io library and code bundler tools. For the database we selected MongoDB, primarily for ease of use and ease of clustering/failover. We also switched from a single server (and thus a single point of failure) to a load-balanced system with full failover (using haproxy for web server load balancing).&#xA;&#xA;This was an explosion in ops complexity.
Mostly warranted, but I definitely over-engineered some things, and made many of the novice mistakes when scaling out.&#xA;&#xA;We started the rewrite in the summer of 2014 and deployed the new version in the spring of 2015. I say &#34;we&#34; because I had help - this was now a proper project internal to my dev agency with more people involved. However, it still wasn&#39;t a priority and was done in off-times and between client projects.&#xA;&#xA;The rewrite looked similar to the original&#xA;&#xA;With the new version, we were finally able to sign up users directly on awwapp.com, again using PayPal subscriptions. We followed the freemium model, in which most of the functionality was free, but you could pay for some additionally useful features.&#xA;&#xA;Ease of use&#xA;&#xA;Throughout the entire lifetime of AWW, I had one major focus, sometimes to the detriment of other aspects of the app: it had to be easy to use. Really easy to use. So easy a child could use it.&#xA;&#xA;We did that successfully - in fact, many children did use it in kindergarten and preschool. It was great seeing pictures teachers shared of kids doing a collaborative class, or exploring maths using our whiteboard. &#34;This is why I&#39;m doing this&#34;, I&#39;d say.&#xA;&#xA;For example, to start drawing on AWW you only had to click one button, labelled Start drawing, and less than a second later, you&#39;re drawing. And the only reason there was some text and a button to click first was so people coming to AWW for the first time knew what the site was about! Before this, people landing on an empty board were sometimes confused about what it was for.&#xA;&#xA;Word of mouth&#xA;&#xA;One thing we didn&#39;t do (except in a small experiment at the start with StumbleUpon) until much later was promote or advertise AWW. In fact, I did hardly any outreach at all.&#xA;&#xA;Virtually all of the (slow and steady) growth was entirely through word of mouth - our users telling others about us.
This was helped by the fact that AWW is a collaboration platform, so users naturally wanted to share the drawing activity or results with others.&#xA;&#xA;Combined with the ease of use and the fact most of the features were free, this made AWW a favorite whiteboard for many teachers and tutors. They were telling each other about it, featuring it in teacher seminars and doing video tutorials on it! This showed me how important and useful AWW was for a growing number of people and really warmed my heart.&#xA;&#xA;Spinning off into a separate startup&#xA;&#xA;Usage slowly but steadily grew over the years, to the point that there was simply too much to do on maintenance, customer support, inbound sales and further development to treat AWW as a side project.&#xA;&#xA;I also realized that, as a technical person, I should get new people on board who would have &#34;skin in the game&#34;, a fresh perspective and a complementary set of skills, such as business development and sales.&#xA;&#xA;I met Zvonimir at a local startup event where he was describing the challenges of his previous startup attempt. Zvonimir had tech chops and biz acumen in equal measure, combined with startup experience, but what I liked most was his levelheadedness when thinking about the realities of building (and selling!) a product in an uncertain environment.&#xA;&#xA;We hit it off immediately and after some time I managed to convince him to join me as a cofounder. I consider this to be the best decision (and best sale!) I made throughout the development of AWW.&#xA;&#xA;We spun off AWW as a separate startup in late 2016, incorporated it in the US using Stripe Atlas, brought several more people on board (a developer, a designer, and a marketing/support person) and set to work. We had three priorities:&#xA;&#xA;The plan&#xA;&#xA;First, to figure out who our users were, why they used us and what we could do to make ourselves more useful to them.
This involved identifying the different markets we were playing in, such as EdTech, enterprise collaboration, etc. Up until that point we had &#34;lucked out&#34; in that our product was useful - now we set out to deliberately discover our customers and optimize our approach (feature set and messaging) towards them.&#xA;&#xA;Secondly, to improve our user experience. AWW was simple and easy to use, but not so simple to figure out all the capabilities it had, or indeed why you&#39;d want to pay for it. The more advanced &#34;premium&#34; functionality was hidden away unless you were a premium user, so many of our free users didn&#39;t even realize there was more to it.&#xA;&#xA;The new UI&#xA;&#xA;Also, as our features and supported use cases grew (and were expected to grow much further), bolting them onto the old interface made it more and more clunky over time. We set out to rethink the UI from scratch, still respecting our old imperatives (simplicity) but incorporating the lessons learned.&#xA;&#xA;The third priority was to start working on our marketing, sales and business development, and to improve our customer support.&#xA;&#xA;Up until then, customer support meant me answering email questions with a few days&#39; delay. We aimed to lower the reply time to within a business day and to be active on more channels, such as Twitter and Facebook.&#xA;&#xA;The online presence tied into marketing. We didn&#39;t do any advertising, aside from a few experiments to test the waters and measure potential interest for various whiteboarding use cases. However, we did want to be present on Twitter, Facebook and LinkedIn, reach out to blogs reviewing us, and talk with teachers recommending us in workshops, and this tied in nicely with the customer relations part of the plan.&#xA;&#xA;The funding&#xA;&#xA;Now we were in business. We had several people and everyone worked exclusively on AWW, except me.
I spent half my time on AWW and half managing my software development agency GoodCode.&#xA;&#xA;Even though Croatian salaries are nowhere near US (and especially Silicon Valley) ones, having more people still meant more expenses. What was previously a profitable side project now turned into a startup that wasn&#39;t yet ramen profitable. I was funding the startup from my agency&#39;s profits, so we didn&#39;t need to worry about runway for the time being, but we were still open to investments.&#xA;&#xA;Our rough plan was to ramp up the product and get to profitability within twelve months. Of course we hoped for much more - our valiant efforts would result in hockey-stick growth, we&#39;d be back in the black in three months, and then the sky&#39;s the limit!&#xA;&#xA;That, of course, didn&#39;t happen.&#xA;&#xA;YC School&#xA;&#xA;At that time we applied to several accelerators, including YC, but didn&#39;t get in. Though one of the reasons was funding, our primary motivation was the advice, learning experience, networking and fresh ideas.&#xA;&#xA;We also applied to the first YC Startup School in 2017 and did get in! The online MOOC-style course, combining online video lectures, weekly check-ins, and discussions with mentors and our fellow founders, was phenomenal. I imagine the intensity is nowhere near a &#34;full&#34; YC experience, but it was tremendously helpful for us.&#xA;&#xA;YC Startup School Completion Certificate&#xA;&#xA;First, the weekly check-ins did wonders for our accountability. We had metrics before and we did look at what we&#39;d achieved (or not) previously, but not in as critical and clear a way as we had to for Startup School. We soon found out our old non-actionable vanity metrics were pretty much worthless and that we didn&#39;t even have an easy way to get to the right numbers. Focusing on the real numbers was like wiping your glasses (or the window) and actually seeing what was going on.&#xA;&#xA;Weekly discussions were just as valuable.
We received (and gave out) thoughtful, probing questions which made us reflect on our implicit assumptions and ways of thinking. Here&#39;s an example: late in the batch, we were talking about the monetization problem, since the vast majority of our users were free users. We didn&#39;t want to cripple AWW&#39;s free functionality but couldn&#39;t find a compelling reason for many of them to upgrade. Our mentor asked us point-blank: &#34;I know you&#39;re against putting ads there, but why not give it a shot and see what happens?&#34;&#xA;&#xA;The ads&#xA;&#xA;This wasn&#39;t a new idea. It was a pretty old idea that I had shot down time and again, because as a consumer I dislike ads, especially the creepy follow-you-around targeted kind, and I subscribe to the &#34;if you&#39;re not paying for the product, you&#39;re the product&#34; line of thought. I also thought ads would clutter useful screen real estate - on a whiteboard, you want the whiteboard area to be as large as possible. So I always dismissed this idea without a second thought, and even my cofounder didn&#39;t convince me otherwise.&#xA;&#xA;But this question at the Startup School office hours got us thinking. Could we do it in a way that wouldn&#39;t screw up the experience? Perhaps. We didn&#39;t jump to it straight away, but the idea kept simmering in the back of our minds.&#xA;&#xA;Finally, in the spring of 2018, we gave it a go. The idea was simple: free users get ads. Subscribe and, besides getting all the premium features, you also get rid of the ads. The rationale was that a lot of people don&#39;t actually mind ads for free stuff, and for those that do, well, there&#39;s an easy way to remove them. This sounds like such a simple concept, but it took us a lot of time to internalize all the rationale and the consequences of this decision.&#xA;&#xA;We implemented Google Adsense in early 2018 and, after a few weeks of wildly erratic earnings, it settled down to a pretty low value, with a lot of traffic showing as invalid.
To this day I&#39;m not exactly sure what the reason was.&#xA;&#xA;I suspect it was some combination of us testing the ads on only 10% of users to begin with, screwing up the ad placement implementation (there were initial app modal dialogs showing above the ads in the first few seconds), and ad rotation (our users were on the same page for an extended period of time, and showing the same ad for 15 minutes makes no sense).&#xA;&#xA;One by one we fixed these perceived problems. It took a very long time for the invalid traffic ratio to come down (from 50% of ads shown to some 10% of total).&#xA;&#xA;In the end, ads were bringing in about a fifth of our revenue, but they also increased our conversion rate (free to paying users), and didn&#39;t materially dent our growth (users leaving in frustration because of ads was my biggest fear).&#xA;&#xA;Chasm&#xA;&#xA;Late 2018 was a rough time for AWW. About two years of talking to customers, building, measuring, and revamping the product and the UI didn&#39;t produce the hockey stick we were aiming for. Not even the &#34;reasonable&#34; 4x-5x growth we expected at the least. We were growing - but very slowly.&#xA;&#xA;What did we do wrong? Maybe we didn&#39;t focus narrowly enough (we still catered to EdTech, small business users, other apps embedding us, and even had an on-premises enterprise offering). Maybe we weren&#39;t aggressive enough in customer acquisition (inbound sales only, no advertising). Me spending just half of my time on the startup certainly didn&#39;t help.&#xA;&#xA;We had some tough decisions to make. After a lot of soul searching we decided to trim down, letting everyone go, with Zvonimir and me staying to work on it part-time. Most of our time we spent on customer relations, inbound sales and maintaining the service.&#xA;&#xA;Without our noticing it, the service still grew, though.
In retrospect, we did good work on a number of things back in 2017; it just took a lot more time for the effects to become visible. By the end of 2019 we approached triple the 2017 revenue numbers (on slower user growth).&#xA;&#xA;Preparing to sell&#xA;&#xA;Okay, so it wasn&#39;t the rocket to the moon we hoped for, but AWW was still a nice little side business. Trouble was, both of us also had other businesses to attend to and couldn&#39;t keep up focus on both the new ventures and AWW indefinitely - something had to go. We also didn&#39;t want to shut it down, so we started thinking about selling it.&#xA;&#xA;We researched and interviewed several online business brokers and decided to go with FE International, partly on the strength of patio11&#39;s recommendations.&#xA;&#xA;Preparations for going to market involved us compiling detailed financials for the previous few years, as well as answering a questionnaire about the business itself, the customers, competition, reasons for selling, and so on.&#xA;&#xA;After we provided this information, FEI prepared a go-to-market prospectus (a sales brochure), we okayed it and they started reaching out to their network. This preparation took several months, so by the start of March 2020 we were on the market.&#xA;&#xA;In two weeks, everything changed.&#xA;&#xA;The world moves online&#xA;&#xA;With the pandemic surging everywhere, much of the western world started shutting down or going online. Schools started adapting their curricula for online, businesses went full work-from-home. Everyone was on Zoom, Teams, and Meet.&#xA;&#xA;Every online productivity and collaboration tool imaginable started seeing their usage numbers surge. This also happened to us. Teachers, school district IT departments, and businesses started signing up in droves.&#xA;&#xA;Our user numbers, customer numbers, number of shared whiteboards, revenue - basically every important metric - skyrocketed.
We were up 6x (that&#39;s 600%) by the end of April compared to mid-March, and the growth didn&#39;t stop there.&#xA;&#xA;We were profiting from the situation, but also wanted to help. We set up a policy to give free premium access to any educational institution until the end of the school year. Besides being a decent thing to do, we figured (correctly) it would convince many schools to stay with us the next year.&#xA;&#xA;The mad IT scramble&#xA;&#xA;Our systems were woefully underprovisioned for the sudden increase in load. We started provisioning new servers left and right, but soon found out there were bottlenecks in how the system was organized that prevented taking full advantage of horizontal scaling.&#xA;&#xA;Examples included having many smaller (unsharded) MongoDB replicas in a set instead of fewer larger-capacity servers, and database interconnect latencies that caused MongoDB queries to be dispatched unevenly across the servers.&#xA;&#xA;Then there were the sloppy things that went unnoticed for years under a smaller load, such as not cleaning up Redis temporary keys quickly enough (leading to our Redis servers quickly using up any amount of memory we threw at them), using a few database queries where a single (complex) one might do the trick, or our indexes not matching the actual usage patterns.&#xA;&#xA;And then there were the subtle bugs and edge cases. As the saying goes, if you have a million users, one-in-a-million bugs are a daily occurrence. Although AWW is designed to gracefully withstand shorter connection problems, this didn&#39;t always work flawlessly and sometimes even compounded the problem by hammering the servers when they were already under heavy load.&#xA;&#xA;Our platform providers had their own woes. With everyone buying or renting any capacity they could get their hands on, the platforms themselves became less stable.
We were hosted on Digital Ocean and although they&#39;re generally great as a platform (and I have virtually all my other stuff there), we were plagued for months by strange intermittent issues where a system of ours would just lock up under moderate load.&#xA;&#xA;Lots of investigation, together with the engineering team on Digital Ocean&#39;s side, yielded no specific results, except that the issues were related to the &#34;shared&#34; type droplets (servers) we were mostly using at the time. Switching to &#34;dedicated&#34; droplets helped a bit, but didn&#39;t eliminate the issue. Whether it was something we did, or just a capacity and noisy neighbor problem, we&#39;ll never know. In the end, we resorted to rebooting any droplets showing signs of being locked up.&#xA;&#xA;Droplet rebooting&#xA;&#xA;Growing the team&#xA;&#xA;Everyone has their &#34;what did you do when the pandemic hit&#34; story. Mine was being holed up in the attic-turned-home-office in front of a computer, madly trying to keep the system running, both excited and terrified. Days were a blur - I don&#39;t remember much of anything else until some time at the end of April.&#xA;&#xA;When the pandemic hit, both Zvonimir and I had other work obligations. We couldn&#39;t just drop those at a moment&#39;s notice, so for weeks we put in the required hours on those obligations and every other waking moment on AWW.&#xA;&#xA;It still wasn&#39;t enough. The support tickets (combining questions about our service and signing up with problem reports) were especially overwhelming. I remember powering through them late at night, going to sleep, waking up early in the morning (usually to some server alarm going off) and then seeing another few hours&#39; worth of tickets already waiting.&#xA;&#xA;It took us about a week to decide to grow the team, and then find, hire and onboard a dedicated support person. 
She was able to handle most of the questions from users while escalating to us stuff like bug reports, bizdev/sales questions and other more complex inquiries.&#xA;&#xA;This finally gave us some breathing room and the ability to focus on business (Zvonimir) and tech (myself) matters. In the coming months we hired another support person, two developers and a part-time designer.&#xA;&#xA;Growing the business&#xA;&#xA;While I was busy scaling up our tech, Zvonimir spent most of his time on Zoom calls, talking to our customers and prospective customers. Whereas previously most of our users and customers were individuals, now organizations of varying sizes were knocking on our door.&#xA;&#xA;We tweaked our bigger pricing plans to better match what people were asking us for. With bigger enterprises now in the picture, things like service-level agreements (SLAs) and standard operating procedure (SOP) documentation needed to be set up before bigger companies could roll us out to their teams.&#xA;&#xA;All of this took an enormous amount of time. Deals still took weeks or months from initial contact to implementation. All of it was inbound interest, but there was a lot of work for every bigger deal. Had we had a dedicated, smooth-running biz team, I&#39;m sure we would have grown twice as fast or more.&#xA;&#xA;No matter. Even with a small team, by the end of 2020 we&#39;d grown 10x, and that includes the lull of the summer months when most EdTech activity is subdued.&#xA;&#xA;To sell or not to sell?&#xA;&#xA;Our explosive growth started just as we were about to put ourselves on the market. Almost immediately, the old numbers didn&#39;t make sense. User, revenue and growth numbers were nothing like the ones used to calculate a (target) sale price.&#xA;&#xA;In the next several weeks we did receive several offers that would&#39;ve been interesting before the growth started, but didn&#39;t make sense in the new environment. 
We decided to put the sale on pause until the numbers became more predictable, and for the next few months didn&#39;t think about it.&#xA;&#xA;By mid-summer the growth had stabilized - we were still growing, but at a somewhat more predictable pace - so we revisited the idea of selling, as we still had a (paused) contract with our brokers.&#xA;&#xA;Based on our experience, selling via brokers is easier if there&#39;s flat or consistently small growth where the future can be easily extrapolated. Coming up with the price then involves tallying up income and expenses and multiplying by a factor derived from experience and rules of thumb.&#xA;&#xA;In our case, growth was triple-digit and virtually all of the expenses were investments into growth, so this pricing model breaks down. We adapted the model, updated the numbers and went to market again. Zvonimir had by that time learned the nitty-gritty details of company valuation and actually provided the final calculation himself.&#xA;&#xA;Ultimately, we didn&#39;t receive any credible offers for the updated price and since we weren&#39;t under any pressure to sell, we decided not to.&#xA;&#xA;Miro&#xA;&#xA;Earlier in 2020, Miro had raised $50M in a Series B funding round to fuel their growth. At some point they reached out to us to see if we could cooperate, and by the autumn the topic turned to the possibility of them acquiring us.&#xA;&#xA;In contrast to earlier potential buyers, Miro was in the same niche as we were, with a similar product, and was also a technology company. We liked the team and their product and felt Miro could be a good home for AWW. Also, the price was right based on our revenue, users and growth numbers.&#xA;&#xA;It took some time to negotiate the details of the deal and go through the due diligence process. 
We went over dozens of drafts for both the LOI (Letter of Intent) and the actual purchase agreement - although we had a &#34;gentleman&#39;s agreement&#34; from the business perspective, there were still a lot of details to iron out.&#xA;&#xA;While we obviously had our own lawyers, tax advisors and accountants (on both the Croatian and US side!), we wanted to be hands-on and understand everything, so by the end of it Zvonimir and I had cumulatively read hundreds of pages of US contract legalese.&#xA;&#xA;Due diligence also got complicated: although AWW, Inc. was a US company, both Zvonimir and I were residents of Croatia and we had contractors from all over the place, not to mention that the original intellectual property (IP) traced back to my development agency (Good Code) from the time it was just another project there. Although we had an IP purchase paper trail, some of it was in Croatian and had to be translated, and the lawyers wanted to triple-check the rest.&#xA;&#xA;Finally, the deal closed and on February 22nd, 2021 we publicly announced that AWW was joining Miro!&#xA;&#xA;AWW joins Miro&#xA;&#xA;Since both AWW and Miro use custom, in-house-built whiteboarding functionality, there wasn&#39;t much sense in continuing to develop both products, and Miro decided to wind down the technology powering AWW.&#xA;&#xA;So we set out to migrate existing customers: for enterprise customers there was more work involved on our side, while smaller users received a one-year free equivalent subscription to Miro and guidance on how to export their AWW whiteboards and upload them to Miro. 
All users were given 5 months to make the transition, and we timed it so that education organizations were able to finish their school year on AWW.&#xA;&#xA;Miro also drew on our experience and lessons learned from AWW in designing and developing a free version, so although no AWW code is directly used, our focus on simplicity, immediacy and ease of use continues.&#xA;&#xA;Looking back&#xA;&#xA;If a time-traveler had told me ten years ago how the story would unfold, I would have laughed them off. What started as a hobby experiment evolved into one of the global leaders in its niche, was used by millions and changed our lives forever.&#xA;&#xA;I like to think that in a tiny way we improved the lives of all our users - be it a kid doodling something in a collaborative drawing class, a roofing company making it easier for their customers to sketch what they need, or a developer discussing software architecture on a confcall - we made those moments easier and more fun.&#xA;&#xA;And hey, what more can you ask for?&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Last month, after the successful transition of our users to <a href="https://miro.com/" rel="nofollow">Miro</a>, I shut down the public servers of A Web Whiteboard (AWW App), marking the final chapter of a decade-long story.</p>

<p>Throughout, millions of users have collectively spent millennia drawing, learning, collaborating, and just having fun with AWW. It&#39;s been used in schools, FAANGs and everywhere in between. Early in 2021 we joined forces with <a href="https://miro.com/" rel="nofollow">Miro</a>, the world&#39;s leading online whiteboard platform.</p>

<p>The start of the story is much humbler. I never intended, or even imagined, it would be the success it has been.</p>

<h3 id="origins">Origins</h3>

<p>Ten years ago, back in 2011, netbooks were all the rage. What started some six years earlier with the EeePC had turned into an entire product category, with a Cambrian explosion of options. I got my hands on one with a curious feature – a touchscreen. At the time, system UIs weren&#39;t at all suited to clumsy fingers instead of a precision-controlled mouse pointer.</p>

<p>So there I was, back in 2011, trying to find a simple, nice drawing program for Linux that would work well, UI-wise, with touchscreens. I didn&#39;t find any, so I decided to write my own. I took this as an opportunity to learn HTML <code>&lt;canvas&gt;</code> and write it as a web app.</p>

<p>It was all pretty rudimentary. You had a pencil in a few colors, an eraser, and the ability to clear the entire canvas. This first version even lacked the ability to save the result – you could always screenshot the page if you wanted to keep a drawing.</p>

<p>This wasn&#39;t even a side project at the time – just a fun experiment. For kicks, I decided to put it up online and, after thinking for a whole two seconds, named it <em>A Web Whiteboard</em>. Since <em>aww.com</em> was taken, I used <em>awwapp.com</em> for the domain (<em>name</em>app.com was a popular way of getting around the domain name squatters at the time).</p>

<h3 id="sharing">Sharing</h3>

<p>A few months in, someone mentioned it&#39;d be cool if multiple people could actually draw at the same time. I took the suggestion as an opportunity to again dabble in new tech. At the time, websockets were just entering mainstream use and Node was starting its meteoric rise, so I selected it for the synchronization backend, together with a now all-but-forgotten RPC-over-sockets library called <a href="https://github.com/flotype/now" rel="nofollow">nowjs</a>. Being primarily a Python developer, I kept at least something familiar, using Flask for saving the board contents.</p>
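<p>The nowjs code is long gone, but the core synchronization idea can be sketched in a few lines of plain JavaScript (names here are illustrative, not the original API): each board relays every stroke to all other connected clients, and replays the history to late joiners.</p>

```javascript
// Illustrative sketch of the relay logic behind a shared whiteboard.
class Board {
  constructor() {
    this.clients = new Set(); // one "send" callback per connected client
    this.strokes = [];        // history, replayed to late joiners
  }
  join(send) {
    this.clients.add(send);
    this.strokes.forEach(send); // newcomer sees what was drawn so far
  }
  draw(from, stroke) {
    this.strokes.push(stroke);
    for (const send of this.clients) {
      if (send !== from) send(stroke); // relay to everyone else
    }
  }
}

// Two clients on the same board: what alice draws, bob receives.
const board = new Board();
const seenByBob = [];
const alice = () => {};
const bob = (stroke) => seenByBob.push(stroke);
board.join(alice);
board.join(bob);
board.draw(alice, { tool: 'pencil', points: [[0, 0], [10, 10]] });
```

<p>In the real app the callbacks were websocket pushes rather than local functions, but the fan-out logic is the same.</p>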

<p>This is how it looked:</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/awwapp-original.png" alt="First public version"/></p>

<p>I spread the word around, got some mention in the <a href="https://thenextweb.com/news/awwapp-a-fun-html5-app-for-drawing-with-friends-and-it-works-great-with-touchscreens" rel="nofollow">press</a> and received positive feedback, including on <a href="https://news.ycombinator.com/item?id=2886353" rel="nofollow">Hacker News</a> and Reddit. There weren&#39;t many users back then (the whole thing comfortably fit in RAM of a rather small VPS), but each time users gathered to draw together, or shared a saved drawing on social media, more people heard about AWW.</p>

<p>After the launch, I maintained AWW but treated it as a side hobby, and for a couple of years most of my focus was on other things.</p>

<h3 id="embedding">Embedding</h3>

<p>One thing I did from the start was organize AWW as a whiteboard widget that could be embedded into other sites, with awwapp.com being just one of the sites that embedded it.</p>

<p>When I started getting requests to let people add it to their sites, it was easy to do. I set up a PayPal subscribe button and wrote up a simple doc on how to embed the widget. When someone subscribed, I&#39;d add them to the accounts “database” (actually a JSON file) and give them a unique API key to embed with.</p>

<p>To add the whiteboard to their sites, my customers would include jQuery and add the following snippet:</p>

<pre><code>&lt;div id=&#34;wrapper&#34; style=&#34;width: 800px; height: 600px;&#34;&gt;&lt;/div&gt;
&lt;script src=&#34;http://static.awwapp.com/plugin/1.0/aww.min.js&#34;&gt;&lt;/script&gt;
&lt;script&gt;
    $(&#39;#wrapper&#39;).awwCanvas({apiKey: YOUR_API_KEY});
&lt;/script&gt;
</code></pre>

<p>After a time, this netted me a few hundred bucks a month, which was nice but nothing I&#39;d base my business on – or so I thought.</p>

<p>I also started getting more and more serious questions about using the whiteboard on awwapp.com for business, tutoring, etc., as well as feature requests. Most of my customers were tutors using it together with Skype to provide online lessons to their students. They often didn&#39;t have a site to embed the whiteboard on, were happy to use the awwapp.com version and were willing to pay for it.</p>

<h3 id="the-rewrite">The rewrite</h3>

<p>As I started treating AWW as a viable business in its own right, I started thinking about rewriting it.</p>

<p>The old JSON file approach was getting unwieldy, as was the fact that all active boards were being kept in memory, limiting growth. Finally, the old nowjs framework was unmaintained, meaning we were stuck on an extremely old Node version (0.6 if I remember correctly).</p>

<p>I decided on a full rewrite with a redesigned architecture, using modern (for the time!) JavaScript features – stuff like promises, the socket.io library and code bundler tools. For the database we selected MongoDB, primarily for ease of use and ease of clustering/failover. We also switched from a single server (and thus a single point of failure) to a load-balanced system with full failover (using haproxy for web server load balancing).</p>
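<p>A minimal haproxy setup of this shape (server names and addresses below are made up for illustration) round-robins web traffic across identical app servers, with health checks taking dead ones out of rotation:</p>

```haproxy
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```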

<p>This was an explosion in ops complexity. Mostly warranted, but I definitely over-engineered some things, and made many of the novice mistakes of scaling out.</p>

<p>We started the rewrite in the summer of 2014 and deployed the new version in spring of 2015. I say “we” because I had help – this was now a proper project internal to my dev agency, with more people involved. However, it still wasn&#39;t a priority and was done in off-hours and between client projects.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/aww2-rewrite.png" alt="The rewrite looked similar to the original"/></p>

<p>With the new version, we were finally able to sign up users directly on awwapp.com, again using PayPal subscriptions. We followed the freemium model, in which most of the functionality was free, but you could pay for some additional useful features.</p>

<h3 id="ease-of-use">Ease of use</h3>

<p>Throughout the entire lifetime of AWW, I had one major focus, sometimes to the detriment of other aspects of the app: it had to be easy to use. <em>Really</em> easy to use. So easy a child could use it.</p>

<p>We did that successfully – in fact, many children <em>did</em> use it in kindergarten and preschool. It was great seeing pictures teachers shared of kids doing a collaborative class, or exploring maths using our whiteboard. “This is why I&#39;m doing this”, I&#39;d say.</p>

<p>For example, to start drawing on AWW you only had to click one button, labelled <em>Start drawing</em>, and less than a second later, you were drawing. And the only reason there was some text and a button to click first was so that people coming to AWW for the first time knew what the site was about! Before this, people landing on an empty board were sometimes confused about what it was for.</p>

<h3 id="word-of-mouth">Word of mouth</h3>

<p>One thing we didn&#39;t do (except in a small experiment at the start with StumbleUpon) until much later was promote or advertise AWW. In fact, I did hardly any outreach at all.</p>

<p>Virtually all of the (slow and steady) growth was entirely through word of mouth – our users telling others about us. This was helped by the fact that AWW is a collaboration platform, so users naturally wanted to share the drawing activity or results with others.</p>

<p>Combined with the ease of use and the fact most of the features were free, this made AWW a favorite whiteboard for many teachers and tutors. They were telling each other about it, featuring it in teacher seminars and doing video tutorials on it! This showed me how important and useful AWW was for a growing number of people and really warmed my heart.</p>

<h3 id="spinning-off-into-a-separate-startup">Spinning off into a separate startup</h3>

<p>Usage slowly but steadily grew over the years, to the point that there was simply too much to do on maintenance, customer support, inbound sales and further development to treat AWW as a side-project.</p>

<p>I also realized that, as a technical person, I should get new people on board who would have “skin in the game”, a fresh perspective and a complementary set of skills such as business development and sales.</p>

<p>I met <a href="https://www.linkedin.com/in/zvonimirsabljic/" rel="nofollow">Zvonimir</a> at a local startup event where he was describing the challenges of his previous startup attempt. Zvonimir had tech chops and biz acumen in equal measure, combined with startup experience, but what I liked most was his levelheadedness when thinking about the realities of building (and selling!) a product in an uncertain environment.</p>

<p>We hit it off immediately and after some time I managed to convince him to join me as a cofounder. I consider this to be the best decision (and best sale!) I made throughout the development on AWW.</p>

<p>We spun off AWW as a separate startup in late 2016, incorporated it in the US using <a href="https://stripe.com/atlas" rel="nofollow">Stripe Atlas</a>, brought several more people on board (a developer, a designer, and a marketing/support person) and set to work. We had three priorities:</p>

<h3 id="the-plan">The plan</h3>

<p>First, to figure out who our users were, why they used us and what we could do to be more useful to them. This involved identifying the different markets we were playing in, such as EdTech, enterprise collaboration, etc. Up until that point we had “lucked out” in that our product was useful – now we set out to deliberately discover our customers and optimize our approach (feature set and messaging) towards them.</p>

<p>Second, to improve our user experience. AWW was simple and easy to use, but it wasn&#39;t so simple to figure out all the capabilities it had, or indeed why you&#39;d want to pay for it. The more advanced “premium” functionality was hidden away unless you were a premium user, so many of our free users didn&#39;t even realize there was more to it.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/aww-ui-rewrite.png" alt="The new UI"/></p>

<p>Also, as our features and supported use cases grew (and were expected to grow much further), bolting them onto the old interface made it more and more clunky over time. We set out to rethink the UI from scratch, still respecting our old imperatives (simplicity) but incorporating the lessons learned.</p>

<p>Our third priority was to start working on our marketing, sales and business development, and to improve our customer support.</p>

<p>Up until then, customer support meant me answering email questions with a few days&#39; delay. We aimed to lower the reply time to within a business day and to be active on more channels, such as Twitter and Facebook.</p>

<p>The online presence tied into marketing. We didn&#39;t do any advertising, aside from a few experiments to test the waters and measure potential interest in various whiteboarding use cases. However, we did want to be present on Twitter, Facebook and LinkedIn, reach out to blogs reviewing us, and talk with teachers recommending us in workshops, and this tied in nicely with the customer relations part of the plan.</p>

<h3 id="the-funding">The funding</h3>

<p>Now we were in business. We had several people and everyone worked exclusively on AWW, except me. I spent half my time on AWW and half managing my software development agency <a href="https://goodcode.io/" rel="nofollow">GoodCode</a>.</p>

<p>Even though Croatian salaries are nowhere near US (and especially Silicon Valley) ones, having more people still meant more expenses. What was previously a profitable side project now turned into a startup that wasn&#39;t yet <a href="http://www.paulgraham.com/ramenprofitable.html" rel="nofollow">ramen profitable</a>. I was funding the startup from the profits from my agency so we didn&#39;t need to worry about runway for the time being, but were still open to investments.</p>

<p>Our rough plan was to ramp up the product and get to profitability within twelve months. Of course we hoped for much more – our valiant efforts would result in hockey-stick growth, we&#39;d be back in the black in three months, and then the sky&#39;s the limit!</p>

<p>That, of course, didn&#39;t happen.</p>

<h3 id="yc-school">YC School</h3>

<p>At that time we applied to several accelerators, including <a href="https://www.ycombinator.com/" rel="nofollow">YC</a>, but didn&#39;t get in. Though funding was one of the reasons, our primary motivation was the advice, learning experience, networking and fresh ideas.</p>

<p>We also applied to the first <a href="https://www.startupschool.org/" rel="nofollow">YC Startup School</a> in 2017 and did get in! The online MOOC-style course, combining online video lectures, weekly check-ins, and discussions with mentors and our fellow founders, was phenomenal. I imagine the intensity is nowhere near a “full” YC experience, but it was tremendously helpful for us.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/startup-school-certificate.png" alt="YC Startup School Completion Certificate"/></p>

<p>First, the weekly check-ins did wonders for our accountability. We had metrics before and we did look at what we&#39;d achieved (or not) previously, but not in as critical and clear a way as we had to for Startup School. We soon found out our old non-actionable vanity metrics were pretty much worthless and that we didn&#39;t even have an easy way to get to the right numbers. Focusing on the real numbers was like wiping your glasses (or the window) and <em>actually</em> seeing what was going on.</p>

<p>The weekly discussions were just as valuable. We received (and gave out) thoughtful and probing questions which made us reflect on our implicit assumptions and ways of thinking. Here&#39;s an example: late in the batch, we were talking about our monetization problem, since the vast majority of our users were free users. We didn&#39;t want to cripple AWW&#39;s free functionality but couldn&#39;t find a compelling reason for many of them to upgrade. Our mentor asked us point-blank: “I know you&#39;re against putting ads there, but why not give it a shot and see what happens?”</p>

<h3 id="the-ads">The ads</h3>

<p>This wasn&#39;t a new idea. It was a pretty old idea that I had shot down time and again, because as a consumer I dislike ads, especially the creepy follow-you-around targeted kind, and I subscribe to the “if you&#39;re not paying for the product, you&#39;re the product” line of thought. I also thought ads would clutter useful screen estate – on a whiteboard, you want the whiteboard area to be as large as possible. So I always dismissed this idea without a second thought, and even my cofounder didn&#39;t convince me otherwise.</p>

<p>But this question at the Startup School office hours got us thinking. Can we do it in a way that wouldn&#39;t screw up the experience? Perhaps. We didn&#39;t jump to it straight away but the idea kept simmering in the back of our minds.</p>

<p>Finally, in the spring of 2018, we gave it a go. The idea was simple: free users get ads. Subscribe and besides getting all the premium features, you also get rid of ads. The rationale was that a lot of people don&#39;t actually mind ads for free stuff and for those that do, well, there&#39;s an easy way to remove them. This sounds like such a simple concept but it took us a lot of time to internalize all the rationale and the consequences of this decision.</p>

<p>We implemented Google Adsense in early 2018 and after a few weeks of wildly erratic earnings, it settled down to a pretty low value, with a <em>lot</em> of traffic showing as invalid. To this day I&#39;m not exactly sure what the reason was.</p>

<p>I suspect it was some combination of us testing the ads on only 10% of users to begin with, screwing up the ad placement implementation (initial in-app modal dialogs were showing above the ads in the first few seconds), and ad rotation (our users stayed on the same page for extended periods of time, and showing the same ad for 15 minutes makes no sense).</p>

<p>One by one we fixed these perceived problems. It took a very long time for the invalid traffic ratio to come down (from 50% of ads shown to some 10% of total).</p>

<p>In the end, ads were bringing in about a fifth of our revenue, but they also increased our conversion rate (free to paying users), and didn&#39;t materially dent our growth (users leaving in frustration because of ads was my biggest fear).</p>

<h3 id="chasm">Chasm</h3>

<p>Late 2018 was a rough time for AWW. About two years of talking to customers, building, measuring, revamping the product and the UI didn&#39;t produce the hockey-stick we were aiming for. Not even the “reasonable” 4x-5x growth we expected at the very least. We <em>were</em> growing – but very slowly.</p>

<p>What did we do wrong? Maybe we didn&#39;t focus narrowly enough (we still catered to EdTech, small business users, other apps embedding us and even had an on-premises enterprise offering). Maybe we weren&#39;t aggressive enough in customer acquisition (inbound sales only, no advertising). Me spending just half of my time on the startup certainly didn&#39;t help.</p>

<p>We had some tough decisions to make. After a lot of soul searching we decided to trim down, letting everyone go, with Zvonimir and me staying to work on it part-time. Most of our time we spent on customer relations, inbound sales and maintaining the service.</p>

<p>Without our noticing it, though, the service <em>still grew</em>. In retrospect, we did good work on a number of things back in 2017; it just took a lot more time for the effects to become visible. By the end of 2019 we approached triple the revenue numbers from 2017 (on slower user growth).</p>

<h3 id="preparing-to-sell">Preparing to sell</h3>

<p>Okay, so it wasn&#39;t the rocket to the moon we&#39;d hoped for, but AWW was still a nice little side business. Trouble was, both of us also had other businesses to attend to and couldn&#39;t keep focusing on both the new ventures and AWW indefinitely – something had to go. We also didn&#39;t want to shut it down, so we started thinking about selling it.</p>

<p>We researched and interviewed several online business brokers and decided to go with <a href="https://feinternational.com/" rel="nofollow">FE International</a>, partly on the strength of <a href="https://www.kalzumeus.com/" rel="nofollow">patio11&#39;s</a> recommendations.</p>

<p>Preparations for going to market involved us compiling detailed financials for the previous few years, as well as answering a questionnaire about the business itself, the customers, competition, reasons for selling, and so on.</p>

<p>After we provided this information, FEI prepared a go-to-market prospectus (a sales brochure), we okayed it and they started reaching out to their network. This preparation took several months, so by the start of March 2020 we were on the market.</p>

<p>In two weeks, everything changed.</p>

<h3 id="the-world-moves-online">The world moves online</h3>

<p>With the pandemic surging everywhere, much of the western world started shutting down or going online. Schools started adapting their curricula for online teaching, businesses went full work-from-home. Everyone was on Zoom, Teams, and Meet.</p>

<p>Every online productivity and collaboration tool imaginable started seeing its usage numbers surge. This also happened to us. Teachers, school district IT departments, and businesses started signing up in droves.</p>

<p>Our user numbers, customer numbers, number of shared whiteboards, revenue – basically every important metric – skyrocketed. We were up 6x (that&#39;s 600%) by the end of April compared to mid-March, and the growth didn&#39;t stop there.</p>

<p>We were profiting from the situation, but also wanted to help. We set up a policy to give free premium access to any educational institution until the end of the school year. Besides being a decent thing to do, we figured (correctly) it would convince many schools to stay with us the next year.</p>

<h3 id="the-mad-it-scramble">The mad IT scramble</h3>

<p>Our systems were woefully under-provisioned for the sudden increase in load. We started provisioning new servers left and right but soon found out there were bottlenecks in how the system was organized that prevented taking full advantage of horizontal scaling.</p>

<p>Examples included having many smaller (unsharded) MongoDB replicas in a set instead of fewer larger-capacity servers, and database interconnect latencies that caused MongoDB queries to be dispatched unevenly across the servers.</p>

<p>Then there were the sloppy things that went unnoticed for years under a smaller load, such as not cleaning up Redis temporary keys quickly enough (leading to our Redis servers quickly using up any amount of memory we threw at them), using a few database queries where a single (complex) one might do the trick, or our indexes not matching the actual usage patterns.</p>
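<p>The fix for the Redis key buildup is worth sketching: instead of writing temporary keys and cleaning them up later, write them with an expiry so Redis evicts them on its own. The helper below is a hypothetical reconstruction, not our original code; with the node-redis client the call maps to the SETEX command.</p>

```javascript
// Write a temporary key with a TTL so Redis expires it automatically.
function saveTempState(client, key, value, ttlSeconds) {
  return client.setEx(key, ttlSeconds, value); // SETEX key ttl value
}

// Tiny in-memory stand-in for a real Redis client, for illustration only.
class FakeRedis {
  constructor() { this.store = new Map(); }
  setEx(key, ttlSeconds, value) { this.store.set(key, { value, ttlSeconds }); }
}

const client = new FakeRedis();
saveTempState(client, 'board:tmp:123', '{"strokes":[]}', 3600);
```

<p>With a TTL on every temporary key, memory usage tracks actual activity instead of growing without bound.</p>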

<p>And then there were the subtle bugs and edge cases. As the saying goes, if you have a million users, one-in-a-million bugs are a daily occurrence. Although AWW was designed to gracefully withstand shorter connection problems, this didn&#39;t always work flawlessly and sometimes even worsened the problem by hammering the servers when they were already under heavy load.</p>

<p>Our platform providers had their own woes. With everyone buying or renting any capacity they could get their hands on, the platforms themselves became less stable. We were hosted on <a href="https://www.digitalocean.com/" rel="nofollow">Digital Ocean</a> and although they&#39;re generally great as a platform (and I have virtually all my other stuff there), we were plagued for months by strange intermittent issues where a system of ours would just lock up under moderate load.</p>

<p>Lots of investigation, together with the engineering team on Digital Ocean&#39;s side, yielded no specific results, except that the issues were related to the “shared” type droplets (servers) we were mostly using at the time. Switching to “dedicated” droplets helped a bit, but didn&#39;t eliminate the issue. Whether it was something we did, or just a capacity and noisy neighbor problem, we&#39;ll never know. In the end, we resorted to rebooting any droplets showing signs of being locked up.</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/droplet-not-responding.png" alt="Droplet rebooting"/></p>

<h3 id="growing-the-team">Growing the team</h3>

<p>Everyone has their “what did you do when the pandemic hit” story. Mine was being holed up in the attic-turned-home-office in front of a computer, madly trying to keep the system running, both excited and terrified. Days were a blur – I don&#39;t remember much of anything else until some time at the end of April.</p>

<p>When the pandemic hit, both Zvonimir and I had other work obligations. We couldn&#39;t just drop those at a moment&#39;s notice, so for weeks we put in the required hours on those obligations and every other waking moment on AWW.</p>

<p>It still wasn&#39;t enough. The support tickets especially (combining questions about our service and signing up with problem reports) were overwhelming. I remember powering through them late at night, going to sleep, waking up in the early morning (usually to some server alarm going off) and then seeing another few hours&#39; worth of tickets already waiting.</p>

<p>It took us about a week to decide to grow the team, and then find, hire and onboard a dedicated support person. She was able to handle most user questions, escalating to us things like bug reports, bizdev/sales questions and other more complex inquiries.</p>

<p>This finally gave us some breathing room and the ability to focus on business (Zvonimir) and tech (myself) matters. In the coming months we hired another support person, two developers and a part-time designer.</p>

<h3 id="growing-the-business">Growing the business</h3>

<p>While I was busy scaling up our tech, Zvonimir spent most of his time on Zoom calls, talking to our customers and prospective customers. Whereas previously most of our users and customers were individuals, now organizations of varying sizes were knocking on our door.</p>

<p>We tweaked our bigger pricing plans to better match what people were asking us for. With larger enterprises now in the pipeline, things like service-level agreements (SLAs) and standard operating procedure (SOP) documentation needed to be set up before those companies could roll us out to their teams.</p>

<p>All of this took an enormous amount of time. Deals still took weeks or months from initial contact to implementation. All of it was inbound interest, but there was a lot of work behind every bigger deal. Had we had a dedicated, smooth-running biz team, I&#39;m sure we would have grown twice as fast or more.</p>

<p>No matter. Even with a small team, by the end of 2020 we&#39;d grown 10x, and that includes the lull of the summer months when most EdTech activity is subdued.</p>

<h3 id="to-sell-or-not-to-sell">To sell or not to sell?</h3>

<p>Our explosive growth started just as we were about to put ourselves on the market. Almost immediately, the old numbers didn&#39;t make sense. Users, revenue and growth numbers were nothing like the ones used to calculate a (target) sale price.</p>

<p>Over the next several weeks we did receive several offers that would&#39;ve been interesting before the growth started, but didn&#39;t make sense in the new environment. We decided to put the sale on pause until the numbers became more predictable, and for the next few months didn&#39;t think about it.</p>

<p>By mid-summer the growth stabilized – we were still growing, but at a bit more predictable pace – so we revisited the idea of selling, as we still had a (paused) contract with our brokers.</p>

<p>Based on our experience, selling via brokers is easier when there&#39;s flat or consistently small growth from which the future can be easily extrapolated. Coming up with the price then involves tallying up income and expenses and multiplying by a factor derived from experience and rules of thumb.</p>

<p>In our case, growth was triple-digit and virtually all of our expenses were investments into growth, so this pricing model broke down. We adapted the model, updated the numbers and went to market again. By that time Zvonimir had learned the nitty-gritty details of company valuation and actually provided the final calculation himself.</p>

<p>Ultimately, we didn&#39;t receive any credible offers at the updated price, and since we weren&#39;t under any pressure to sell, we decided not to.</p>

<h3 id="miro">Miro</h3>

<p>Earlier in 2020, Miro had <a href="https://techcrunch.com/2020/04/23/miro-lands-50m-series-b-for-digital-whiteboard-as-demand-surges/" rel="nofollow">raised $50M in a Series B funding round</a> to fuel their growth. At some point they reached out to us to see if we could cooperate, and by autumn the topic had turned to the possibility of them acquiring us.</p>

<p>In contrast to earlier potential buyers, Miro was in the same niche as us, with a similar product, and was also a technology company. We liked the team and their product and felt Miro could be a good home for AWW. And the price was right, based on our revenue, user and growth numbers.</p>

<p>It took some time to negotiate the details of the deal and go through the due diligence process. We went over dozens of drafts of both the LOI (Letter of Intent) and the actual purchase agreement – although we had a “gentleman&#39;s agreement” from the business perspective, there were still a lot of details to iron out.</p>

<p>While we obviously had our own lawyers, tax advisors and accountants (on both the Croatian and US sides!), we wanted to be hands-on and understand everything, so by the end of it Zvonimir and I had cumulatively read hundreds of pages of US contract legalese.</p>

<p>Due diligence also got complicated: although AWW, Inc. was a US company, both Zvonimir and I were residents of Croatia and we had contractors from all over the place, not to mention that the original intellectual property (IP) traced back to my development agency (Good Code) from the times it was just another project there. Although we had an IP purchase paper trail, some of it was in Croatian and had to be translated, and the lawyers wanted to triple-check the rest.</p>

<p>Finally, the deal closed, and on February 22nd, 2021, we publicly announced that AWW was joining Miro!</p>

<p><img src="https://s3.amazonaws.com/vault.senko.net/awwapp/AWW-Miro.png" alt="AWW joins Miro"/></p>

<p>Since both AWW and Miro use custom whiteboarding functionality built in-house, there wasn&#39;t much sense in continuing to develop both products, and Miro decided to wind down the technology powering AWW.</p>

<p>So we set out to migrate existing customers: for enterprise customers there was more work involved on our side, while smaller users received a free one-year equivalent subscription to Miro and guidance on how to export their AWW whiteboards and upload them to Miro. All users were given 5 months to make the transition, and we timed it so that education organizations could finish their school year on AWW.</p>

<p>Miro also drew on our experience and lessons learnt from AWW in designing and developing a <a href="https://webwhiteboard.com/" rel="nofollow">free version</a>, so although no AWW code is directly used, our focus on simplicity, immediacy and ease of use lives on.</p>

<h3 id="looking-back">Looking back</h3>

<p>If a time-traveler had told me ten years ago how the story would unfold, I would have laughed them off. What started as a hobby experiment evolved into one of the global leaders in its niche, was used by millions, and changed our lives forever.</p>

<p>I like to think that, in a tiny way, we improved the lives of all our users – be it a kid doodling something in a collaborative drawing class, a roofing company making it easier for their customers to sketch what they need, or a developer discussing software architecture on a conference call – we made those moments easier and more fun.</p>

<p>And hey, what more can you ask for?</p>
]]></content:encoded>
      <guid>https://blog.senko.net/the-story-of-a-web-whiteboard</guid>
      <pubDate>Thu, 16 Sep 2021 12:44:46 +0000</pubDate>
    </item>
    <item>
      <title>Bridges vs Apps</title>
      <link>https://blog.senko.net/bridges-vs-apps?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Is software development a proper engineering discipline? On one hand, it looks like it should be - computer science is a rigorous, hard, math-based science. The hardware is deterministic.&#xA;&#xA;As anyone involved with the discipline will attest, though, it&#39;s sorely lacking. Just Google for &#34;if programmers made ...&#34; or &#34;if software engineers built ...&#34; for the cheeky, but not entirely untrue, comparisons.&#xA;&#xA;Why are we so bad at it? Is it because we&#39;re really struggling to be something we&#39;re not? Or are there other reasons we&#39;re failing?&#xA;&#xA;Let&#39;s do a cheeky comparison of our own.&#xA;&#xA;Bridging the gap&#xA;&#xA;Suppose you want to build a bridge. Right off the bat, you know a few things:&#xA;&#xA;which two places across the water you want to connect&#xA;how many lanes and of which type (car, train, bike, pedestrian) you want to support&#xA;terrain on both sides and the weather patterns&#xA;typical building materials for the bridge and their properties&#xA;&#xA;Based on this, you can design the bridge architecture. It&#39;ll probably go through a few iterations - maybe number of lanes will change based on the cost projections.&#xA;&#xA;During the design, you&#39;ll never:&#xA;&#xA;cheap out on the architects - it&#39;s vital you get this right and the design phase cost is small compared to the overall project budget&#xA;rush the design phase - most of the calendar time is spent chasing around stakeholders (getting people to decide on building a bridge in the first place, chasing permits, etc) so there&#39;s no need to rush the architects&#xA;&#xA;Once everyone signs off on the plans, you know exactly how and what to build, down to the bill of materials. Then you go and build the bridge.&#xA;&#xA;Once it is built, it is done, except for regular maintenance. 
The terrain doesn&#39;t change, the materials don&#39;t change their properties, and you definitely don&#39;t attempt to move the bridge or add another lane in a year.&#xA;&#xA;You also don&#39;t expect terrorists to be attacking the bridge, and you usually don&#39;t  account for once-in-a-thousand-years natural disasters. If you try to, you&#39;ll be accused (correctly) of over-engineering and being insecure. There are standard safety factors and you don&#39;t need to go over them except for a very good reason.&#xA;&#xA;I&#39;m not a civil engineering myself (I have a bit of background in mechanical engineering), but I don&#39;t think any would find the above description and assumptions wrong. Simplistic, yes, wrong, probably not.&#xA;&#xA;In contrast, in software development you often:&#xA;&#xA;don&#39;t know the exact requirements, or they will change throughout the development process&#xA;want to keep the software as configurable and adaptable as possible (no hard limits on the number of users or amount of data, for example)&#xA;can&#39;t account for all the differences in the platforms your software will run in (either native desktop/mobile, or across web browsers)&#xA;deal with platforms or frameworks that change rapidly and in a ways that make you rethink the design of your software&#xA;need to interact with poorly specified and/or poorly working 3rd party components (even if they work great, network stability can always be a problem)&#xA;&#xA;What&#39;s more, with software:&#xA;&#xA;we expect to need it to change - either to evolving business realities or to keep up with the underlying platform changes&#xA;we expect it to be reasonably robust against malicious attacks&#xA;&#xA;In contrast to civil or mechanical engineering, when we talk about &#34;designing the solution&#34; in software development, we talk about having a high-level architecture with roughly fleshed-out components (what they are, what they need to interact with). 
It&#39;s not even close to the level of detail the architecture plans give the bridge builders.&#xA;&#xA;This means that the design phase in civil engineering doesn&#39;t map to solution design in software development - it actually maps to the entire programming effort, which is the bulk of the work, cost and time in any software development project.&#xA;&#xA;        +------------+-----------------------------------+&#xA;BRIDGE  |   Design   |              Building             |&#xA;        +------------+------------------------+----------+&#xA; APP    |             Programming             |   Ops    |&#xA;        +-------------------------------------+----------+&#xA;&#xA;Copying bridges&#xA;&#xA;Ok, you build one bridge. How hard is to build another just like it? Well, you repeat the whole process again in its entirety. Hopefully you do reap some savings due to the fact that you&#39;re now more experienced and have established supply chain connections, but that&#39;s it. The majority of work is still there and if everything else stays the same, the budget won&#39;t be very different.&#xA;&#xA;In software development that&#39;s  - you just make (or deploy) a new copy. Done!&#xA;&#xA;What does this mean for builders themselves? This means in civil engineering you get more money for building (executing on the plan), while in software development you get more money for developing the solution (the design part). You just don&#39;t hear about multinational mega-corporations of architects on a level with FAANG.&#xA;&#xA;Interestingly, the multinational mega-corporations handling ops do exist (PaaS, IaaS, app stores). They combine infrastructure, distribution and operations - in our comparison, in the civil engineering world they&#39;d be an amalgam of builders, utilities and shipping companies all fused into one.&#xA;&#xA;Cost of failure&#xA;&#xA;If a bridge collapses, that&#39;s a huge problem, even if noone gets hurt. 
If software malfunctions, in most cases it&#39;s not a big deal (sometimes it is and then it&#39;s newsworthy). More often, if it works but badly, people scrape by.&#xA;&#xA;Nowadays you can even fix the bug and push an over-the-air update - no need to recall all the cars of planes you&#39;ve built with the defect.&#xA;&#xA;This means the cost of failure (either catastrophic, or failure to live up to the expectations) is much lower in software than in either civil or mechanical engineering. That again means the return on investment (ROI) on any effort mitigating that cost is small, discouraging spending time and money on it.&#xA;&#xA;So software today is buggy becase it can be. It&#39;s slow because it can be. It&#39;s insecure because it can be. It&#39;s not due to any lazyness of programmers, incompetence of managers or malice of big bosses. It&#39;s how the world works - if you spend too much on quality where not required, your competitor can spend the same money to grow faster with no downside.&#xA;&#xA;Yet it is not obvious that it is unequivocally a bad thing. Consider the failures of famous projects with (near) unlimited budgets and perfectionism, such as Duke Nukem Forever. It makes as little sense to overengineer a software product as it would be for a building. Tech moves at breathtaking speed, and we&#39;ve come so far in just a few decades, because it often can &#34;move fast and break things&#34;. Software can be more adaptable and quicker to the changing needs. &#xA;&#xA;As software becomes a vital part of more and more things in the real world, the costs of failure rise. Whereas you might be only slightly annoyed at having to restart your Facebook app, you really don&#39;t want any trouble from the software controling your two-tonne car while you&#39;re doing 80 on a highway.&#xA;&#xA;If that means more software will need to move slowly and avoid breaking things. And maybe we&#39;ll be more like the bridge builders.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Is software development a proper engineering discipline? On one hand, it looks like it should be – computer science is a rigorous, hard, math-based science. The hardware is deterministic.</p>

<p>As anyone involved with the discipline will attest, though, it&#39;s sorely lacking. Just Google for “if programmers made ...” or “if software engineers built ...” for the cheeky, but not entirely untrue, comparisons.</p>

<p>Why are we so bad at it? Is it because we&#39;re really struggling to be something we&#39;re not? Or are there other reasons we&#39;re failing?</p>

<p>Let&#39;s do a cheeky comparison of our own.</p>

<h3 id="bridging-the-gap">Bridging the gap</h3>

<p>Suppose you want to build a bridge. Right off the bat, you know a few things:</p>
<ul><li>which two places across the water you want to connect</li>
<li>how many lanes and of which type (car, train, bike, pedestrian) you want to support</li>
<li>terrain on both sides and the weather patterns</li>
<li>typical building materials for the bridge and their properties</li></ul>

<p>Based on this, you can design the bridge architecture. It&#39;ll probably go through a few iterations – maybe the number of lanes will change based on the cost projections.</p>

<p>During the design, you&#39;ll never:</p>
<ul><li>cheap out on the architects – it&#39;s vital you get this right and the design phase cost is small compared to the overall project budget</li>
<li>rush the design phase – most of the calendar time is spent chasing stakeholders (getting people to decide on building a bridge in the first place, chasing permits, etc.), so there&#39;s no need to rush the architects</li></ul>

<p>Once everyone signs off on the plans, you know <em>exactly</em> how and what to build, down to the bill of materials. Then you go and build the bridge.</p>

<p>Once it is built, it is <em>done</em>, except for regular maintenance. The terrain doesn&#39;t change, the materials don&#39;t change their properties, and you definitely don&#39;t attempt to move the bridge or add another lane in a year.</p>

<p>You also don&#39;t expect terrorists to attack the bridge, and you usually don&#39;t account for once-in-a-thousand-years natural disasters. If you try to, you&#39;ll be accused (correctly) of over-engineering and being insecure. There are standard safety factors and you don&#39;t need to exceed them except for a very good reason.</p>

<p>I&#39;m not a civil engineer myself (I have a bit of background in mechanical engineering), but I don&#39;t think any of them would find the above description and assumptions wrong. Simplistic, yes; wrong, probably not.</p>

<p>In contrast, in software development you often:</p>
<ul><li>don&#39;t know the exact requirements, or they will change throughout the development process</li>
<li>want to keep the software as configurable and adaptable as possible (no hard limits on the number of users or amount of data, for example)</li>
<li>can&#39;t account for all the differences in the platforms your software will run on (either native desktop/mobile, or across web browsers)</li>
<li>deal with platforms or frameworks that change rapidly, and in ways that make you rethink the design of your software</li>
<li>need to interact with poorly specified and/or poorly working 3rd party components (even if they work great, network stability can always be a problem)</li></ul>

<p>What&#39;s more, with software:</p>
<ul><li>we expect it to change – either to adapt to evolving business realities or to keep up with underlying platform changes</li>
<li>we expect it to be reasonably robust against malicious attacks</li></ul>

<p>In contrast to civil or mechanical engineering, when we talk about “designing the solution” in software development, we talk about having a high-level architecture with roughly fleshed-out components (what they are, what they need to interact with). It&#39;s not even close to the level of detail the architecture plans give the bridge builders.</p>

<p>This means that the design phase in civil engineering doesn&#39;t map to solution design in software development – it actually maps to the entire programming effort, which is the bulk of the work, cost and time in any software development project.</p>

<pre><code>        +------------+-----------------------------------+
BRIDGE  |   Design   |              Building             |
        +------------+------------------------+----------+
 APP    |             Programming             |   Ops    |
        +-------------------------------------+----------+
</code></pre>

<h3 id="copying-bridges">Copying bridges</h3>

<p>Ok, you build one bridge. How hard is it to build another just like it? Well, you repeat the whole process again in its entirety. Hopefully you reap some savings because you&#39;re now more experienced and have established supply-chain connections, but that&#39;s it. The majority of the work is still there, and if everything else stays the same, the budget won&#39;t be very different.</p>

<p>In software development that&#39;s trivial – you just make (or deploy) a new copy. Done!</p>

<p>What does this mean for the builders themselves? In civil engineering you get more money for building (executing on the plan), while in software development you get more money for developing the solution (the design part). You just don&#39;t hear about multinational mega-corporations of architects on a level with FAANG.</p>

<p>Interestingly, the multinational mega-corporations handling ops do exist (PaaS, IaaS, app stores). They combine infrastructure, distribution and operations – in our comparison, in the civil engineering world they&#39;d be an amalgam of builders, utilities and shipping companies all fused into one.</p>

<h3 id="cost-of-failure">Cost of failure</h3>

<p>If a bridge collapses, that&#39;s a huge problem, even if no one gets hurt. If software malfunctions, in most cases it&#39;s not a big deal (sometimes it is, and then it&#39;s newsworthy). More often, if it works but badly, people scrape by.</p>

<p>Nowadays you can even fix the bug and push an over-the-air update – no need to recall all the cars or planes you&#39;ve built with the defect.</p>

<p>This means the cost of failure (either catastrophic failure, or failure to live up to expectations) is much lower in software than in either civil or mechanical engineering. That in turn means the return on investment (ROI) of any effort mitigating that cost is small, discouraging spending time and money on it.</p>

<p>So software today is buggy because it can be. It&#39;s slow because it can be. It&#39;s insecure because it can be. It&#39;s not due to any laziness of programmers, incompetence of managers or malice of big bosses. It&#39;s how the world works – if you spend too much on quality where it&#39;s not required, your competitor can spend the same money to grow faster with no downside.</p>

<p>Yet it is not obvious that this is unequivocally a bad thing. Consider the failures of famous projects with (near) unlimited budgets and perfectionism, such as <a href="https://en.wikipedia.org/wiki/Development_of_Duke_Nukem_Forever" rel="nofollow">Duke Nukem Forever</a>. It makes as little sense to over-engineer a software product as it does a building. Tech moves at breathtaking speed, and we&#39;ve come so far in just a few decades, because it often <em>can</em> “move fast and break things”. Software <em>can</em> be more adaptable and quicker to respond to changing needs.</p>

<p>As software becomes a vital part of more and more things in the real world, the cost of failure rises. Whereas you might be only slightly annoyed at having to restart your Facebook app, you really don&#39;t want any trouble from the software controlling your two-tonne car while you&#39;re doing 80 on a highway.</p>

<p>That means more software will need to move slowly and avoid breaking things. And maybe we&#39;ll become more like the bridge builders.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/bridges-vs-apps</guid>
      <pubDate>Fri, 23 Jul 2021 06:45:09 +0000</pubDate>
    </item>
    <item>
      <title>Digital hygiene</title>
      <link>https://blog.senko.net/digital-hygiene?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Internet, in 2018, was not a safe place.&#xA;&#xA;By this I don’t mean spam arriving in our inbox, viruses or malware lurking in software downloaded from less-reputable places, or phishing sites masquerading as our favorite e-commerce platforms.&#xA;&#xA;These risks are real, but well understood and widely recognized. However, in the past years there has been an increasing evidence for, and occurrence of, completely different kinds of risk that most of us online are exposed to.&#xA;&#xA;Examples of these are pervasive tracking of behavior online, appropriation of personal data by the apps or sites we use, data breaches, and junk media optimized to maximize engagement.&#xA;&#xA;Before I go over each of these in more detail, a disclaimer: I don’t think everyone’s out to get me, or that big corporations such as Google or Facebook are inherently evil. I do think that companies, big and small, are incentivized to behave in ways that create or increase these risks. That is, the default is to behave in a way that makes things worse.!--more--&#xA;&#xA;Tracking&#xA;&#xA;Start with tracking. Google and Facebook know every page you visit if it has Facebook or Google login, social or like buttons, embeds fonts or maps, uses Google Analytics or any of their dozens of APIs. So do the ad networks: a handful of major ones are used on most sites, and they track unique users so they can build your profile, optimize ad inventory that you see and retarget you. This means they follow you around the internet to show you ads for products you viewed but haven’t bought yet.&#xA;&#xA;Google and Facebook, the portals to the online world for many, know the most about us. But they are not unique in this regard: companies such as Twitter, Amazon and virtually everyone else does it as well. &#xA;&#xA;Is this really a problem? I believe so. I personally don’t like my privacy being violated at will by a random site I happen to visit. 
On a practical level, I understand that the companies collecting this data aim to maximize their shareholders’ value, not my benefit. While some amount of tracking is acceptable to improve the service I get — and people may have different notion of what’s acceptable to them — there should be a way to draw the line somewhere instead of going full-in.&#xA;&#xA;Tracking can be countered by using an ad blocker, such as uBlock Origin or AdBlock Plus. Today’s ad blockers do more than just block annoying ads: they also disrupt all kinds of invasive tracking, and can be integrated in all modern browsers and mobile devices. This approach does have a side-effect of blocking ads too, depriving sites of revenue. However, at this point I don’t think browsing the web is at all viable without an ad blocker. To put it bluntly, the experience is horrible.&#xA;&#xA;I also use DuckDuckGo, an alternative search engine with a focus on privacy and usability. Its results are usually slightly worse than Google’s, but it does have a few extra tricks up its sleeve (such as direct searching of a specific sites) and it’s easy to fall back to Google, so it’s tradeoff I’m willing to make. DuckDuckGo also has a browser extension which can also block tracking software and report site’s privacy score, among other things.&#xA;&#xA;Finally, I use Firefox with Multi-Account Containers and First-Party Isolation features enabled. These are “block 3rd-party cookies” option on steroids, completely isolating each site so no cross-site tracking is possible. The side effect is disrupting features such as log in via Google or Facebook, comments or likes, and site widgets from 3rd party sites. 
Equipped with a good password manager (I use 1Password), I find this only mildly annoying.&#xA;&#xA;On mobile, I use Firefox Focus, which behaves like a browser in incognito mode, making it easy to forget all history (including any tracking cookies) with a single tap.&#xA;&#xA;Personal data&#xA;&#xA;The amount of information big internet giants track about us is dwarfed by the amount of data we freely give them: photos, videos, text posts, travel and purchase information, our plans, intentions, fears and desires.&#xA;And for the most part, they can keep this data forever, use it as they like, including giving others access to it. This has been somewhat limited by the European GDPR and the series of privacy scandals involving Facebook intentionally and unintentionally giving others vast amounts of what should’ve been private data. But it is still largely in place for those not inclined to, or not aware that they have the option to, micro-manage what rights over their data they give Facebook and other big companies.&#xA;&#xA;The problem here lies in not seeing through the implications of this. When you tell Facebook (or Google, …) something, it remembers it forever. For instance, that embarrassing photo or status update you hope everyone’s forgotten by now. That awkward private message that you sent as public instead. That photo of you six months old naked in a bathtub that your parents thought was infinitely cute and just had to share publicly at the time.&#xA;&#xA;All of this will be used, to sell you stuff or to make you come back for more. If you get embarrassed, mobbed, fired or worse — hey, you shouldn’t have posted it online.&#xA;&#xA;Which brings me to the best way to minimize this risk: treat everything you post as if you’ve shouted it on prime-time national TV.  If you wouldn’t be comfortable letting the world know about it, don’t put it online.&#xA;&#xA;The only exception to this is email and private messages. 
Data breaches notwithstanding, these usually come with privacy implied and companies take care to protect these. But even here, it pays to be cautious because your conversation peers might not be.&#xA;&#xA;Another way to ensure your privacy online is respected is to periodically — say, once a year — visit privacy and security settings of the sites you use and verify that all the settings are to your liking. These companies have an annoying habit of changing available privacy controls which then default to something the company finds useful, not what you might’ve wanted.&#xA;&#xA;Data breaches&#xA;&#xA;Massive data breaches, exposing passwords, social security numbers or other private and sensitive information of thousands or even millions of users, are nowadays a common occurrence.&#xA;&#xA;While perfect security is impossible, the fact is that companies are not incentivized to strive for this perfection. One of the larger data breaches, that of up to 40 million credit and debit card details of Target in 2013, cost the company $202 million in total. This is in comparison with $2.4 billion net income for the company in 2017.&#xA;&#xA;The largest data breach in 2018 was that of the Marriot Starwood customers&#39; data, affecting anywhere between 300 and 500 million customers.&#xA;&#xA;Laws like the European General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act are slowly changing things for the better, but there’s still a long road ahead.&#xA;&#xA;Individually, the best protection is following security best practices such as not using the same password on multiple sites, using HTTPS, enabling 2-factor authentication where avilable, using end-to-end encryption for private messaging, and so on. 
This decreases the problems you have when (not if) one of the sites you visit has a data breach.&#xA;&#xA;Junk media&#xA;&#xA;I use the term “junk media” for content that’s primarily designed to get eyeballs, not provide useful information, be interesting or entertain. A few examples are textual and video content farms, social media feeds optimized for engagement or viral content, or irrelevant “breaking news”. Again, the line here is blurry and everyone will have differing criteria.&#xA;&#xA;Why am I mentioning junk media in a post about staying safe online? Similar to over-sharing of our personal data, this is something we do to ourselves without really thinking about it. Accumulating over the longer term, it can also have negative consequences for us.&#xA;&#xA;Junk media may be “fun” or “interesting” in the sense that we have an instant reaction, just like junk food can be tasty while containing poor nutritional value. In either case, indulging in moderation is not a problem, but a steady diet of either won’t be good for our health.&#xA;&#xA;The problem is that moderation doesn’t maximize revenue. In purely commercial terms, the winning strategy for the media companies is to maximize views and engagement while minimizing churn. The more time we spend on those sites and the more content we consume, comment on or share, the better. The quality of time spent for the consumer is of secondary importance — just good enough to prevent people from leaving.&#xA;&#xA;Junk media is not confined to online. It’s equally present in the press, on the TV and the radio. In the past, there’s been a lot said of negative effects of too much TV. Comparatively little research has been done into negative effects of too much social media.&#xA;&#xA;Not consuming too much junk media is as easy — or as hard — as not overeating junk food: just don’t do it. A more actionable advice is putting it “out of reach” so you won’t unthinkingly reach for it. 
For example, I open Facebook in an incognito browser and have 2-factor authentication enabled. This forces me to go through a multi-step login process each time I want to visit, making it inconvenient enough that I only visit if I really want to. For the same reason I also haven’t installed the Facebook app on my phone — it makes it too convenient to dive back in.&#xA;&#xA;I’ve titled this post “Digital hygiene”. As with its physical counterpart, digital hygiene consists of small things we can do every day to improve our health and minimize health risks.&#xA;&#xA;By starting with security best practices, and by thinking about what kind of information we’re sharing (willingly or not) with companies and the larger public, and what the implications may be down the road, we can change our behavior ever so slightly to minimize the downsides of living online while still reaping the benefits.&#xA;&#xA;This post is my attempt to raise your awareness of some of these things, share a few practical tips, and give you some food for thought.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>The internet, in 2018, was not a safe place.</p>

<p>By this I don’t mean spam arriving in our inbox, viruses or malware lurking in software downloaded from less-reputable places, or phishing sites masquerading as our favorite e-commerce platforms.</p>

<p>These risks are real, but well understood and widely recognized. In recent years, however, there has been increasing evidence for, and occurrence of, completely different kinds of risk that most of us online are exposed to.</p>

<p>Examples of these are pervasive tracking of behavior online, appropriation of personal data by the apps or sites we use, data breaches, and junk media optimized to maximize engagement.</p>

<p>Before I go over each of these in more detail, a disclaimer: I don’t think everyone’s out to get me, or that big corporations such as Google or Facebook are inherently evil. I do think that companies, big and small, are incentivized to behave in ways that create or increase these risks. That is, the default is to behave in a way that makes things worse.</p>

<h3 id="tracking">Tracking</h3>

<p>Let’s start with tracking. Google and Facebook know every page you visit if it has a Facebook or Google login, social or like buttons, embedded fonts or maps, Google Analytics, or any of their dozens of other APIs. So do the ad networks: a handful of major ones are used on most sites, and they track unique users so they can build your profile, optimize the ad inventory you see and <a href="https://instapage.com/blog/what-is-retargeting" rel="nofollow">retarget</a> you. This means they follow you around the internet to show you ads for products you viewed but haven’t bought yet.</p>

<p>Google and Facebook, the portals to the online world for many, know the most about us. But they are not unique in this regard: Twitter, Amazon and virtually everyone else do it as well.</p>

<p>Is this really a problem? I believe so. I personally don’t like my privacy being violated at will by a random site I happen to visit. On a practical level, I understand that the companies collecting this data aim to maximize their shareholders’ value, not my benefit. While some amount of tracking is acceptable to improve the service I get — and people may have different notions of what’s acceptable to them — there should be a way to draw the line somewhere instead of going all-in.</p>

<p>Tracking can be countered by using an ad blocker, such as <a href="https://en.wikipedia.org/wiki/UBlock_Origin" rel="nofollow">uBlock Origin</a> or <a href="https://adblockplus.org/" rel="nofollow">AdBlock Plus</a>. Today’s ad blockers do more than just block annoying ads: they also disrupt all kinds of invasive tracking, and can be integrated into all modern browsers and mobile devices. This approach does have the side effect of depriving sites of ad revenue. However, at this point I don’t think browsing the web is at all viable without an ad blocker. To put it bluntly, the experience is <em>horrible</em>.</p>

<p>I also use <a href="https://duckduckgo.com/" rel="nofollow">DuckDuckGo</a>, an alternative search engine with a focus on privacy and usability. Its results are usually slightly worse than Google’s, but it has a few extra tricks up its sleeve (such as directly searching a specific site) and it’s easy to fall back to Google, so it’s a tradeoff I’m willing to make. DuckDuckGo also has a browser extension which can block tracking software and report a site’s privacy score, among other things.</p>

<p>Finally, I use Firefox with the <a href="https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/" rel="nofollow">Multi-Account Containers</a> and <a href="https://www.ghacks.net/2017/11/22/how-to-enable-first-party-isolation-in-firefox/" rel="nofollow">First-Party Isolation</a> features enabled. These are the “block 3rd-party cookies” option on steroids, completely isolating each site so no cross-site tracking is possible. The side effect is disrupting features such as logging in via Google or Facebook, comments or likes, and widgets from 3rd-party sites. Equipped with a good password manager (I use 1Password), I find this only mildly annoying.</p>

<p>On mobile, I use <a href="https://www.mozilla.org/en-US/firefox/mobile/" rel="nofollow">Firefox Focus</a>, which behaves like a browser in <em>incognito mode</em>, making it easy to forget all history (including any tracking cookies) with a single tap.</p>

<h3 id="personal-data">Personal data</h3>

<p>The amount of information big internet giants track about us is dwarfed by the amount of data we freely give them: photos, videos, text posts, travel and purchase information, our plans, intentions, fears and desires.
And for the most part, they can keep this data forever, use it as they like, including giving others access to it. This has been somewhat limited by the European GDPR and the series of privacy scandals involving Facebook intentionally and unintentionally giving others vast amounts of what should’ve been private data. But it is still largely in place for those not inclined to, or not aware that they have the option to, micro-manage what rights over their data they give Facebook and other big companies.</p>

<p>The problem here lies in not thinking through the implications of this. When you tell Facebook (or Google, …) something, <em>it remembers it forever</em>. For instance, that embarrassing photo or status update you hope everyone’s forgotten by now. That awkward private message that you sent as public instead. That photo of you, six months old and naked in a bathtub, that your parents thought was infinitely cute and just had to share publicly at the time.</p>

<p>All of this will be used, to sell you stuff or to make you come back for more. If you get embarrassed, mobbed, fired or worse — hey, you shouldn’t have posted it online.</p>

<p>Which brings me to the best way to minimize this risk: treat everything you post as if you’ve shouted it on prime-time national TV.  If you wouldn’t be comfortable letting the world know about it, don’t put it online.</p>

<p>The only exception to this is email and private messages. Data breaches notwithstanding, these usually come with privacy implied, and companies take care to protect them. But even here, it pays to be cautious, because your conversation peers might not be.</p>

<p>Another way to ensure your privacy online is respected is to periodically — say, once a year — visit privacy and security settings of the sites you use and verify that all the settings are to your liking. These companies have an annoying habit of changing available privacy controls which then default to something the company finds useful, not what you might’ve wanted.</p>

<h3 id="data-breaches">Data breaches</h3>

<p>Massive data breaches, exposing passwords, social security numbers or other private and sensitive information of thousands or even millions of users, are nowadays a <a href="https://www.identityforce.com/blog/2018-data-breaches" rel="nofollow">common occurrence</a>.</p>

<p>While perfect security is impossible, the fact is that companies are not incentivized to strive for this perfection. One of the larger data breaches, that of up to 40 million credit and debit card details of Target customers in 2013, <a href="http://fortune.com/2017/05/23/target-settlement-data-breach-lawsuits/" rel="nofollow">cost the company $202 million in total</a>. Compare that with Target’s $2.4 billion net income in 2017: the breach cost less than a tenth of a single year’s profit.</p>

<p>The largest data breach in 2018 was that of the <a href="https://www.forbes.com/sites/kateoflahertyuk/2018/11/30/marriott-breach-what-happened-how-serious-is-it-and-who-is-impacted/#771b1f397d25" rel="nofollow">Marriott Starwood customers&#39; data</a>, affecting anywhere between 300 and 500 million customers.</p>

<p>Laws like the European <a href="https://en.wikipedia.org/wiki/General_Data_Protection_Regulation" rel="nofollow">General Data Protection Regulation</a> (GDPR) and California’s <a href="https://www.nytimes.com/2018/06/28/technology/california-online-privacy-law.html" rel="nofollow">Consumer Privacy Act</a> are slowly changing things for the better, but there’s still a long road ahead.</p>

<p>Individually, the best protection is following <a href="https://www.owasp.org/index.php/Consumer_Best_Practices" rel="nofollow">security best practices</a> such as not using the same password on multiple sites, using HTTPS, enabling 2-factor authentication where available, using end-to-end encryption for private messaging, and so on. This decreases the problems you have when (not if) one of the sites you visit has a data breach.</p>

<h3 id="junk-media">Junk media</h3>

<p>I use the term “junk media” for content that’s primarily designed to get eyeballs, not provide useful information, be interesting or entertain. A few examples are textual and video content farms, social media feeds optimized for engagement or viral content, or irrelevant “breaking news”. Again, the line here is blurry and everyone will have differing criteria.</p>

<p>Why am I mentioning junk media in a post about staying safe online? Like the over-sharing of our personal data, this is something we do to ourselves without really thinking about it. As it accumulates over the long term, it can also have negative consequences for us.</p>

<p>Junk media may be “fun” or “interesting” in the sense that we have an instant reaction, just like junk food can be tasty while containing poor nutritional value. In either case, indulging in moderation is not a problem, but a steady diet of either won’t be good for our health.</p>

<p>The problem is that moderation doesn’t maximize revenue. In purely commercial terms, the winning strategy for the media companies is to maximize views and engagement while minimizing churn. The more time we spend on those sites and the more content we consume, comment on or share, the better. The quality of time spent <em>for</em> the consumer is of secondary importance — just good enough to prevent people from leaving.</p>

<p>Junk media is not confined to the internet. It’s equally present in the press, on TV and on the radio. In the past, a lot has been said about the negative effects of too much TV. Comparatively little research has been done into the negative effects of <a href="http://www.bbc.com/future/story/20180118-how-much-is-too-much-time-on-social-media" rel="nofollow">too much social media</a>.</p>

<p>Not consuming too much junk media is as easy — or as hard — as not overeating junk food: just don’t do it. More actionable advice is to put it “out of reach” so you won’t unthinkingly reach for it. For example, I open Facebook in an incognito browser and have 2-factor authentication enabled. This forces me to go through a multi-step login process each time I want to visit, making it inconvenient enough that I only visit if I really want to. For the same reason I also haven’t installed the Facebook app on my phone — it makes it too convenient to dive back in.</p>

<hr>

<p>I’ve titled this post “Digital hygiene”. As with its physical counterpart, digital hygiene consists of small things we can do every day to improve our health and minimize health risks.</p>

<p>By starting with security best practices, and by thinking about what kind of information we’re sharing (willingly or not) with companies and the larger public, and what the implications may be down the road, we can change our behavior ever so slightly to minimize the downsides of living online while still reaping the benefits.</p>

<p>This post is my attempt to raise your awareness of some of these things, share a few practical tips, and give you some food for thought.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/digital-hygiene</guid>
      <pubDate>Sun, 30 Dec 2018 21:23:57 +0000</pubDate>
    </item>
    <item>
      <title>Universal Basic Income and cost of things</title>
      <link>https://blog.senko.net/universal-basic-income-and-cost-of-things?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Dispelling one particular critique of UBI&#xA;&#xA;Universal Basic Income (UBI) has started appearing with increasing regularity in research and experiments all around the world (Finland, India, …Oakland?). Of course, the scheme has both benefits and drawbacks, its proponents and critics, but in the absence of experience from a large-scale long-running UBI program, it is hard to evaluate what would actually happen. !--more--&#xA;&#xA;One popular critique is that giving everyone some amount of money would simply raise the floor for prices. Inflation would do the rest and the scheme would quickly cancel itself out.&#xA;&#xA;This critique is actually easy to dispel, and I believe it’s useful to do so, so we can focus on other actual challenges (of which are many). Here we go:&#xA;&#xA;If everyone gets a fixed sum of extra cash, what’s to stop merchants from raising prices? Other merchants. Consider bakeries. They could raise prices, knowing everyone has extra money to pay for bread. However, all it takes is one savvy baker to recognize he or she could raise the prices just a little lower than everyone else, enough for people to start preferring their shop instead of the competition. The baker could increase their market share significantly (open a chain of “cheap” bakeries). But the competition would quickly catch up to the rascal’s plan and lower their prices accordingly. The baker could lower them still a bit more, and so on…&#xA;&#xA;Where would that downward pressure stop? At the point at which there is no point in running the bakery (the profit is too small). Which is the exact same price point as before[0], and doesn’t depend on the purchasing power of the consumers (that is, it’s not tied to how much money people have).&#xA;&#xA;However, in this description I’ve made two important assumptions: that there is enough competition between bakers, and that bread production can be ramped up and down as demand increases. 
If either of these assumptions is false, the picture becomes less rosy.&#xA;&#xA;Start with competition. If there is only one baker (a monopoly), he or she is always in a position to charge as much as they like (which is typically just below the point at which many people would stop eating bread and switch to something else, say, rice). In this case, introducing UBI would directly lead to a price increase, unless the price itself was regulated (as is the case with utilities, which are natural monopolies).&#xA;&#xA;The other assumption is that production can adapt to demand. When this is not the case, that is, when supply is limited, the competition between consumers for a limited number of products will almost certainly gobble up any extra money people receive. A spectacular example of this is the housing market in Silicon Valley, where an ever increasing number of IT workers with sky-high salaries competes for a very limited amount of housing.&#xA;&#xA;An even better example is tuition at prestigious universities in the US. Since it is not in the universities’ interest to increase the number of students, increasing the money supply for prospective students through student loans meant that students were able to pay more for the same thing, and that universities could simply increase tuition fees[1]. Increasing the money supply to students via UBI would have the same effect.&#xA;&#xA;Coming back to the validity of the critique that UBI would simply result in price increases, we can see that it rests on the question of whether people spend more of their money on commodity products, or on limited-supply products and monopolies.&#xA;&#xA;The recent stats from the US Bureau of Labor Statistics[2] show that roughly a third of expenses are housing related.
To me, this suggests that those in very skewed housing markets (like Silicon Valley, New York, or London) might see price increases due to UBI, but for most people (who live in healthier housing markets) housing costs shouldn’t be affected. Other costs relate to more commoditized goods and services, so they should be even less affected.&#xA;&#xA;This doesn’t mean that Universal Basic Income is definitely a net benefit for society. There are many other issues to examine, challenges to be sorted out, and the jury will be out on its effects for a long time.&#xA;&#xA;But at least we’ve got one out of the way.&#xA;&#xA;[0] Actually, it could be even lower. If UBI replaces minimum wage, workers may decide they’re willing to work for a little less, thereby reducing the cost of bread. I’m not an economist, statistician, or a social scientist so I will not venture into discussion on whether that’d be a good thing overall.&#xA;&#xA;[1] That’s not to say student loans weren’t beneficial overall. It may very well be that the system allowed more students to attend the universities as not all schools’ prices hiked (and not nearly by the same amount as the top ones), and allowed more middle-class and poorer students the opportunity. I know too little about the matter to draw any conclusions either way.&#xA;&#xA;[2] I imagine stats for other western countries would show qualitatively similar amounts.]]&gt;</description>
      <content:encoded><![CDATA[<p><em>Dispelling one particular critique of UBI</em></p>

<p><a href="https://en.wikipedia.org/wiki/Basic_income" rel="nofollow">Universal Basic Income</a> (UBI) has started appearing with increasing regularity in research and experiments all around the world (<a href="http://www.businessinsider.com/finlands-biggest-trade-union-a-universal-basic-income-is-useless-2017-2" rel="nofollow">Finland</a>, <a href="http://blogs.wsj.com/indiarealtime/2017/01/31/india-considers-fighting-poverty-with-a-universal-basic-income/" rel="nofollow">India</a>, …<a href="https://blog.ycombinator.com/moving-forward-on-basic-income/" rel="nofollow">Oakland</a>?). Of course, the scheme has both benefits and drawbacks, its proponents and critics, but in the absence of experience from a large-scale long-running UBI program, it is hard to evaluate what would <em>actually</em> happen. </p>

<p>One popular critique is that giving everyone some amount of money would simply raise the floor for prices. Inflation would do the rest and the scheme would quickly cancel itself out.</p>

<p>This critique is actually easy to dispel, and I believe it’s useful to do so, so we can focus on other actual challenges (of which there are many). Here we go:</p>

<p>If everyone gets a fixed sum of extra cash, what’s to stop merchants from raising prices? Other merchants. Consider bakeries. They could raise prices, knowing everyone has extra money to pay for bread. However, all it takes is one savvy baker to recognize he or she could raise prices <em>by just a little less</em> than everyone else, enough for people to start preferring their shop over the competition. The baker could increase their market share significantly (open a chain of “cheap” bakeries). But the competition would quickly catch on to the rascal’s plan and lower their prices accordingly. The baker could lower them still a bit more, and so on…</p>

<p>Where would that downward pressure stop? At the point where there is no longer any point in running the bakery (the profit is too small). That is the exact same price point as before[0], and it doesn’t depend on the purchasing power of consumers (that is, it’s not tied to how much money people have).</p>
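<p>For the curious, the undercutting dynamic above can be sketched in a few lines of code (a toy illustration with invented numbers for cost, margin and starting prices; a sketch, not an economic model): wherever prices start, competition drives them down to the cost-plus-minimal-margin floor, independent of how much money consumers have.</p>

```python
# Toy simulation of price competition among bakers.
# COST and MIN_MARGIN are invented numbers for illustration only.

COST = 1.00        # cost of producing one loaf
MIN_MARGIN = 0.05  # below this profit, running the bakery isn't worth it

def converge(prices, rounds=1000):
    """Each round, every baker matches or slightly undercuts the cheapest
    competitor, but never drops below the viability floor."""
    floor = COST + MIN_MARGIN
    for _ in range(rounds):
        best = min(prices)
        prices = [max(floor, best - 0.01) for _ in prices]
    return prices

# Whether consumers are poor or flush with UBI cash, prices settle
# at cost plus the minimal margin:
assert all(abs(p - (COST + MIN_MARGIN)) < 1e-9
           for p in converge([2.50, 3.00, 2.80]))
```

<p>Note that giving every consumer extra money appears nowhere in the loop: the equilibrium price depends only on the cost structure and the degree of competition, which is exactly the point of the argument.</p>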

<p>However, in this description I’ve made two important assumptions: that there is enough competition between bakers, and that bread production can be ramped up and down as demand increases. If either of these assumptions is false, the picture becomes less rosy.</p>

<p>Start with competition. If there is only one baker (a monopoly), he or she is always in a position to charge as much as they like (which is typically just below the point at which many people would stop eating bread and switch to something else, say, rice). In this case, introducing UBI would directly lead to a price increase, unless the price itself was regulated (as is the case with utilities, which are natural monopolies).</p>

<p>The other assumption is that production can adapt to demand. When this is not the case, that is, when supply is limited, <em>the competition</em> between consumers for a limited number of products will almost certainly gobble up any extra money people receive. A spectacular example of this is the housing market in Silicon Valley, where an ever increasing number of IT workers with sky-high salaries competes for a very limited amount of housing.</p>

<p>An even better example is tuition at prestigious universities in the US. Since it is not in the universities’ interest to increase the number of students, increasing the money supply for prospective students through student loans meant that students were able to pay more for the same thing, and that universities could simply increase tuition fees[1]. Increasing the money supply to students via UBI would have the same effect.</p>

<p>Coming back to the validity of the critique that UBI would simply result in price increases, we can see that it rests on the question of whether people spend more of their money on commodity products, or on limited-supply products and monopolies.</p>

<p>The recent stats from the <a href="https://www.bls.gov/news.release/pdf/cesan.pdf" rel="nofollow">US Bureau of Labor Statistics</a>[2] show that roughly a third of expenses are housing related. To me, this suggests that those in very skewed housing markets (like Silicon Valley, New York, or London) might see price increases due to UBI, but for most people (who live in healthier housing markets) housing costs shouldn’t be affected. Other costs relate to more commoditized goods and services, so they should be even less affected.</p>

<p>This doesn’t mean that Universal Basic Income is definitely a net benefit for society. There are many other issues to examine, challenges to be sorted out, and the jury will be out on its effects for a long time.</p>

<p>But at least we’ve got one out of the way.</p>

<hr>

<p>[0] Actually, it could be even lower. If UBI replaces minimum wage, workers may decide they’re willing to work for a little less, thereby reducing the cost of bread. I’m not an economist, statistician, or a social scientist so I will not venture into discussion on whether that’d be a good thing overall.</p>

<p>[1] That’s not to say student loans weren’t beneficial overall. It may very well be that the system allowed more students to attend the universities as not all schools’ prices hiked (and not nearly by the same amount as the top ones), and allowed more middle-class and poorer students the opportunity. I know too little about the matter to draw any conclusions either way.</p>

<p>[2] I imagine stats for other western countries would show qualitatively similar amounts.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/universal-basic-income-and-cost-of-things</guid>
      <pubDate>Sat, 11 Feb 2017 11:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Hear no evil</title>
      <link>https://blog.senko.net/hear-no-evil?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Voice-controlled AI assistants are advanced enough to be dangerous&#xA;&#xA;Useful voice recognition, combined with AI capable of parsing specific phrases and sentences, is finally here. Amazon’s Alexa, Apple’s Siri and Google’s Assistant are showing us what the future will be like. !--more--&#xA;&#xA;However, the safeguards are lagging behind the capabilities, as the recent example of a TV anchor ordering dollhouses shows. The fact that the system picked up voice from the TV and interpreted it as a command sounds funny, but should be terrifying to anyone remotely interested in computer security. It sounds like a Hollywood adaptation of the classic remote code execution bug — but it’s not a fantasy any more.&#xA;&#xA;We’re so happy that we have machines that can listen to us, that in our rush to use / buy / create them, we haven’t stopped and made sure they listen only to us. That’s why a kid can order a dollhouse while parents are asleep or away, TV anchor reporting on that can order hundreds more, and we can play fun pranks when visiting friends by ordering tons of toilet paper while they’re not looking :-)&#xA;&#xA;Accidentally ordering something online can be terribly inconvenient and cost you a fine buck, but as these assistants get control over more devices in our homes and our lives (IoT anyone?), we’ll start seeing real problems. Here’s a stupid trick that might just work in a year of so: Alexa, unlock the front door!&#xA;&#xA;Mobile phone voice assistants show one way of handling this: by requiring the phone to be unlocked for (most) commands to work. Yet while may make sense for phones (and only slightly inconvenience the user), it’s a non-starter for home automation systems. If I have to walk over and press a button, I might just as well do the entire action (such as turning the light off, or unlocking the door) myself.&#xA;&#xA;Another possibility is speaker recognition. 
By analyzing how words are uttered, not just what they are, such systems can distinguish the voice of an authorized user. However, like many other biometric systems, this is easily fooled by a facsimile of the user — in this case, a simple recording. Thus anyone with a mobile phone can “hack” this kind of security.&#xA;&#xA;More effective, and only slightly more inconvenient, would be the combination of requiring the physical presence of the user in the room (for example, by sensing their mobile phone, smartwatch, or other personal item they’d carry around most of the time) and speaker recognition. In this case, even if a hack is attempted, the user themselves would be around to prevent it.&#xA;&#xA;So the good news is, it shouldn’t be that hard to build more secure voice-controlled systems. The bad news is, as we’ve seen with huge botnets made of compromised IoT devices, many companies in the home automation space currently have neither the experience nor the incentives to focus more on security.&#xA;&#xA;Voice-controlled AI assistants are here to stay, and it’s a good thing — they’re mightily convenient. But expect more fun anecdotes and scary stories in the years ahead.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><em>Voice-controlled AI assistants are advanced enough to be dangerous</em></p>

<p>Useful voice recognition, combined with AI capable of parsing specific phrases and sentences, is finally here. Amazon’s Alexa, Apple’s Siri and Google’s Assistant are showing us what the future will be like. </p>

<p>However, the safeguards are lagging behind the capabilities, as the recent example of <a href="http://www.cw6sandiego.com/news-anchor-sets-off-alexa-devices-around-san-diego-ordering-unwanted-dollhouses/" rel="nofollow">a TV anchor ordering dollhouses shows</a>. The fact that the system picked up voice from the TV and interpreted it as a command sounds funny, but should be <em>terrifying</em> to anyone remotely interested in computer security. It sounds like a Hollywood adaptation of the <a href="https://en.wikipedia.org/wiki/Arbitrary_code_execution" rel="nofollow">classic remote code execution bug</a> — but it’s not a fantasy any more.</p>

<p>We’re so happy to have machines that can listen to us that, in our rush to use, buy and create them, we haven’t stopped to make sure they listen only to us. That’s why a kid can order a dollhouse while the parents are asleep or away, a TV anchor reporting on it can order hundreds more, and we can play fun pranks when visiting friends by ordering tons of toilet paper while they’re not looking :-)</p>

<p>Accidentally ordering something online can be terribly inconvenient and cost you a pretty penny, but as these assistants get control over more devices in our homes and our lives (IoT anyone?), we’ll start seeing real problems. Here’s a stupid trick that might just work in a year or so: <a href="https://ifttt.com/applets/342970p-alexa-tells-smartthings-to-unlock-front-door-i-use-the-phrase-unlatch-front-door-lock-updated" rel="nofollow">Alexa, unlock the front door!</a></p>

<p>Mobile phone voice assistants show one way of handling this: by requiring the phone to be unlocked for (most) commands to work. Yet while this may make sense for phones (and only slightly inconveniences the user), it’s a non-starter for home automation systems. If I have to walk over and press a button, I might just as well do the entire action (such as turning the light off, or unlocking the door) myself.</p>

<p>Another possibility is <a href="https://en.wikipedia.org/wiki/Speaker_recognition" rel="nofollow">speaker recognition</a>. By analyzing how words are uttered, not just what they are, such systems can distinguish the voice of an authorized user. However, like many other biometric systems, this is easily fooled by a facsimile of the user — in this case, a simple recording. Thus anyone with a mobile phone can “hack” this kind of security.</p>

<p>More effective, and only slightly more inconvenient, would be the combination of requiring the physical presence of the user in the room (for example, by sensing their mobile phone, smartwatch, or other personal item they’d carry around most of the time) and speaker recognition. In this case, even if a hack is attempted, the user themselves would be around to prevent it.</p>

<p>So the good news is, it shouldn’t be <em>that</em> hard to build more secure voice-controlled systems. The bad news is, as we’ve seen with <a href="https://en.wikipedia.org/wiki/Mirai_%28malware%29" rel="nofollow">huge botnets made of compromised IoT devices</a>, many companies in the home automation space currently have neither the experience nor the incentives to focus more on security.</p>

<p>Voice-controlled AI assistants are here to stay, and it’s a good thing — they’re mightily convenient. But expect more fun anecdotes and scary stories in the years ahead.</p>
]]></content:encoded>
      <guid>https://blog.senko.net/hear-no-evil</guid>
      <pubDate>Sat, 07 Jan 2017 11:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>