What music, no matter when you play it or whatever mood you are in, always transports you back to a happy time or happy moment in your life?
I found myself pondering this last week when I listened to The Seer (1986) by Big Country.
It’s 1986 and I’m at my friend James’s house.
“You’ve got to listen to this,” said James reaching for his new 12″ album.
The needle came down on the vinyl and we listened to the anticipatory crackles and pops as it wound its way to the opening track.
This time we run
This time we hide
This time we draw on all the fire we have inside.
We need some time
To find a place
Where I can wipe away the madness from your face.
Lyrics from “Look Away” by Big Country
We sat almost in silence for the next 50 minutes and 30 seconds as this Celtic rock washed over us. It was heavy, it was delicate, it was rousing and beautifully sweet in equal measure.
This was one of those moments of simple contentment: the simplicity of sitting in the presence of a best friend.
If I remember correctly, James was made homeless that year—or maybe the next. He moved into the spare room of someone from church. This was one of the first albums we listened to on my first visit to his new home.
This album always reminds me of our friendship. The closeness we had. Both the fun and the laughter during those formative teenage years and the moments of sitting in silence with one another listening to music—Big Country, Sting, Jean-Michel Jarre, Guns N’ Roses—letting the music and lyrics change our view of the world.
I know the weary can rise again
I know it all from the words you send
Lyrics from “Remembrance Day” by Big Country
I’ve not seen James in many years—he eventually moved to live in Sweden—but this album reminds me of him every time I play it.
I’ve started using NAPS2 to convert paper documents to PDF to store in Dropbox or Microsoft OneNote as part of my paperless(-ish) office approach to productivity.
Predictions about the paperless office have been circulating for over 40 years now. And yet here I am in 2018 sitting next to a four-drawer filing cabinet containing letters and documents about everything from my house rental and utility bills to health records, university qualifications, and work-related documents.
A couple of years ago I decided to try to keep an electronic copy of my most important (or frequently used) documents and after comparing the relative benefits of Dropbox, Google Drive, Evernote and Microsoft OneNote, I finally settled on OneNote (with Dropbox as a backup in some cases) and started scanning.
OneNote stores its files in OneDrive, which I wasn’t using for much else—and given that I subscribe to Microsoft Office 365 I have about 1 TB of cloud space* at my disposal.
[* Disclaimer: There is no such thing as the cloud, it’s just someone else’s computer.]
I like OneNote because:
I can view the PDF on the page; I don’t have to wait for it to open in Acrobat Reader.
The documents synchronise between my desktop PC, laptop, tablet and mobile phone, so I can access them wherever I am, even away from home.
I can annotate and highlight the document using the draw functionality of OneNote.
I can type notes on the same page, which are searchable.
OneNote has built-in OCR (optical character recognition) capabilities, which means I can right-click a PDF print-out embedded within OneNote and extract editable text from the document to the clipboard to be used elsewhere—that can save a lot of typing. (There’s a scripted sketch of the same idea just after this list.)
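OneNote handles that last point entirely through its right-click menu, but if you ever wanted to script the same idea outside OneNote, here is a minimal sketch using the open-source Tesseract engine via the pytesseract, pyperclip and Pillow packages (which you would need to install). To be clear, this is not OneNote’s own API, and the filename is a made-up example:

```python
# A minimal sketch of OCR-to-clipboard, assuming the Tesseract engine plus
# the pytesseract, pyperclip and Pillow packages are installed.
# "utility_bill.png" is a hypothetical scanned page.
from PIL import Image
import pytesseract
import pyperclip

# Run OCR over the scanned image and grab the recognised text.
text = pytesseract.image_to_string(Image.open("utility_bill.png"))

# Put it on the clipboard, ready to paste elsewhere without retyping.
pyperclip.copy(text)
print(text[:200])  # preview the first couple of hundred characters
```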
I’m fortunate to have an Epson flatbed scanner on my desk. It came bundled with, among other things, the Epson Copy Utility, which allows me to use the scanner with a printer (or PDF writer) much like a photocopier.
But recently I’ve found the Epson Copy Utility to be increasingly unstable. Often, midway through scanning a document, the application will crash and tie up the scanner, requiring me either to hunt down and kill the offending processes in Task Manager or to reboot the PC, which is often quicker. Though, to be fair, the application is over 11 years old and is a 32-bit application running on a 64-bit system.
Hunting around for an alternative, I discovered NAPS2 (Not Another PDF Scanner 2), an open-source project, which I wholeheartedly support. So far, the results have been superb and I haven’t lost a single document yet.
For those who understand this sort of thing, NAPS2 supports both WIA and TWAIN. It allows you to reorder the scanned pages. It will save to PDF or image (it supports multiple formats including BMP, GIF, JPEG, PNG and TIFF). It has built-in OCR. Or you can simply print the document—including sending it to OneNote straight from NAPS2.
My experiences so far
Having been trying to run a more paperless office for over a year now, I can’t see myself wanting to give up my filing cabinet anytime soon (there are still some documents that I would want to keep in paper format), but this has certainly enhanced my productivity.
Before I started scanning, I decided on a document structure within OneNote. I store all my documents within the same notebook but in different groups and sections. I try to keep these as consistent as I can with how I have organised my filing cabinet, which helps me locate the hard copy when I need to. And I adapt and extend the structure when it seems sensible for me to do so.
When I started scanning documents to PDF and embedding them within OneNote, I didn’t simply start at the front of my filing cabinet and work my way through. Instead, I prioritised those documents I thought I might need most often. Whenever I am out and realise that document X or Y would be useful in OneNote, I add a task to Todoist to scan it when I get home.
What I should perhaps do next is use this as the basis for deciding which documents to recycle or shred from my filing cabinet.
Having my key documents available wherever I am has been invaluable. Hurray for mobile phones, OneNote for Android and 4G network connections.
Overall, while there is a little overhead in sitting scanning documents as soon as they arrive—although many companies like insurance and utility companies now use PDFs via email as their primary documentation—I have found this approach to be entirely worthwhile. It keeps all my documents together, I can access them whenever and wherever I need them and I feel much more organised as a result.
I am currently learning Russian and reminding myself of the integral importance that failure has in the learning process.
I visited the USSR in 1988 as part of a modern studies high school trip to Moscow and Leningrad (now St Petersburg). You can see the photos from my trip on Flickr.
In late 1987 I started a short course to learn some Russian phrases. I didn’t get much further than Что это (what is it?) and это стол и стул (it’s a table and chair) before I gave up. Still, at least learning the Cyrillic alphabet helped me read signs as we travelled around this other-worldly country that was then still behind the Iron Curtain.
Since then, however, I have always wanted to complete the course and learn Russian for nothing other than the academic satisfaction. Plus, obviously, if Russia is to continue to interfere in national politics and influence elections it would be useful to be able to communicate with our eventual overlords in their own tongue.
Thirty years on and I still haven’t learned the language. Which is why, last week, I decided that now was the right time. I realised that there would never be a perfect time. I would never have a free six months to devote to the task. If I wanted to do it, I would just have to start now and squeeze it into my daily schedule—five minutes here, ten minutes there.
Why did I give up so easily?
But why did I give up so soon after starting to learn?
There were likely a few practical reasons, not least energy levels, the volume of school work, and family dynamics (my dad was suffering from brain damage by that point).
Surely, it can’t all have come down to time or motivation. Back in 1987/1988 I had all the time in the world—besides school I had few other commitments. And I had the motivation—I would be visiting Russia during the Easter break in April 1988.
But I still gave up. Why? This puzzled me for a long time, until I found the answer in a couple of books about parenting.
I think the problem was that for as long as I can remember I had been told that I was clever, and learning Russian is hard. I gave up, I reckon, because it challenged my self-perception as a clever boy.
During the late 1990s, a couple of psychologists (Claudia Mueller and Carol Dweck) from Columbia University, New York, ran a series of experiments with children, in which one group of children was praised for being clever while the other group was praised for the effort they put in (regardless of whether they got the answers right or wrong).
What they discovered was that children who were told they were really bright after completing one set of tasks were then less likely to exert themselves when presented with a choice of further tasks, while the children who had been praised for the effort they put in during the first task were far more likely to opt for a more difficult second task.
Telling a child they are intelligent might make them feel good, but [it] can also induce a fear of failure, causing the child to avoid challenging situations because they might look bad if they are not successful. In addition, telling a child they are intelligent suggests they do not need to work hard to perform well. Because of this, children may be less motivated to make the required effort and be more likely to fail.
It turns out that children who are frequently praised tend to become more competitive and more interested in belittling others. Their primary interest becomes image-maintenance—having been told they are clever, they want to continue to be seen to be clever even if that means pulling others down around them.
Looking back at my childhood and teenage years, I don’t recognise that last aspect of tearing others down, but I wholeheartedly recognise the image-maintenance part. I would joke years later that I simply dropped the subjects I didn’t do well in, not realising at the time that I did this because they clashed with the self-image I had been developing, one built up by folks telling me that I was clever.
I liked being clever. I didn’t like doing things that didn’t make me feel clever. It makes perfect sense. But I wonder what I missed in giving up things too soon. I wonder what would have happened if instead I had been praised for my effort and dug in deep at times.
How can students succeed if they are not taught to fail?
While I was working as the warden in a university halls of residence, I would frequently have conversations with students about the importance of failure.
Here we had, arguably, some of the brightest young people in the country, who had progressed from success to success to become, in many cases, the brightest in their school. And then, when they arrived at St Andrews among other similar youngsters, they found themselves to be decidedly average.
That took them quite by surprise. And, coupled with a different style of learning at university and an increased workload, many found themselves not hitting their usual 100% expectations.
To many it felt like the sky was falling in: their world was collapsing and their self-image was being shaken at a fundamental level.
In my first year at St Andrews, I would tell them, I failed two-thirds of my course. Two-thirds! I passed divinity but failed Old Testament and ecclesiastical history; I managed to progress to second year by the skin of my teeth. But that experience changed me—it helped me to understand how I work best. It helped me to understand what works for me, and what doesn’t. In the end, I graduated with a 2:1 honours degree that I was delighted with.
It is okay to fail
This paragraph from an article on @TeacherToolkit that I read last year resonated with me:
In recent years there seems to be an accepted fallacy that learning happens in a linear fashion, with educators setting up opportunities for children to jump from success to success without ever encountering failure. However, if this is the case, to what extent are your pupils simply working as opposed to learning?
They suggest incorporating failure in the learning process. This is their list of suggestions:
Provide the children with the toolkit to cope with failure.
Praise the children’s best efforts and show them how to move their learning forward.
Develop an ethos where the children are not afraid to fail and develop strategies to overcome challenge.
Don’t hide mistakes from children. Adults make mistakes all the time, but children seldom are afforded the opportunity of witnessing this.
Make teaching points of your mistakes and model how to deal appropriately with failure.
Pupils should have the confidence to attempt new activities in a safe and secure environment knowing that failure will be met with encouragement and support. Failure isn’t something to be feared, but rather is part of the learning process which should be embraced.
Children need to know that it is okay to fail and it is the trying again that is important, this is how children succeed.
But it’s not just children and university students who need to learn the importance of failure. For the last few years I worked as an agile project manager in a web development team—”fail fast” is something we used to advertise as one of the benefits of working in an agile manner.
I’m delighted to see Karl Scotland (from whose writings I have learned a lot over the years) is running a session at this week’s Lean Agile Scotland event in Edinburgh entitled “Failure is not an option”.
That’s right, failure is not an option—it is a necessity.
For many organisations, failure is something to be avoided. Poor results are frowned upon; people don’t take risks, and they hide undesirable results for fear of being blamed. But it’s these failures that generate new information from which we can learn, and this learning is what leads to organisational improvement and long-term success. This session will explore why failure is not an option, but a necessity, and how we can make failure a friend and not a foe.
Karl Scotland, “Don’t bury your failures. Let them inspire you.”
I really like that quotation: don’t bury your failures, let them inspire you.
There is something here to inspire me as I try to remember what этот (this), он (he), она (she) and оно (it) mean in Russian; as I try to encourage Joshua to do his French horn practice—”you’re trying really hard to play the right notes, well done” rather than “you’re so good at that”; and as I reflect on my last twenty years of work and try to make sense of what my strengths are, what weaknesses I need to work on and where I should put my energy next.
Over the last couple of months I’ve been considering buying a TV to also use as a PC monitor. I’ve been surprised to find relatively little information online about it, so here’s what I’ve discovered and my experiences so far. In short: it has been great.
You don’t have a telly?!
I’ve often been amazed by people’s reactions whenever I’ve told them that I don’t own a TV.
“You don’t have a telly?!”
Until recently I’ve not felt that I’ve really needed one. I do have a TV licence, as you still need one to watch shows on the BBC iPlayer, but I watch those on my PC or Android tablet.
This is fine on my own, but watching films with my boys has been tricky. Most often we’ve been lying down, scrunched up, watching my laptop at the foot of the bed.
Moving out of hall into my own wee house added new dynamics, so I finally gave in and decided to buy a TV.
As I don’t have much space in my house and wouldn’t have anywhere to put it other than on my desk, I knew that it would have to double as a PC monitor.
So I knew my research had to cover two areas: graphics cards and 4K Ultra HD televisions.
Graphics cards
Put simply, a graphics card is the technology inside a computer that creates the images to output to a screen. Generally speaking, the better the card, the better quality the images and the higher resolution it can handle. This is especially true if you play computer games. The better the graphics card, the more detail you see and the smoother the games appear to play.
I had quite an old graphics card, an NVIDIA GeForce GTX 660 (2 GB), which at the time I bought it was only a few notches down from the top-level gaming cards available. It used a PCI Express 3.0 slot, had 2 GB of RAM and supported a maximum digital resolution of 4K (4096 × 2160), which was fine for my dual-HD-monitor setup (3840 × 1080). It served me well for four or five years and was exactly what I needed then.
But it was beginning to struggle with a few of the more modern games. Forza Horizon 3, for example, we were running on the lowest settings available within the game, and even then it complained that the graphics card wasn’t coping.
This article from PC Gamer was very helpful in identifying the kind of graphics card to upgrade to:
You can use any TV with HDMI inputs in place of a computer display. If you’re looking at 4K TVs, you’ll want a graphics card with at least an HDMI 2.0 port (HDMI 2.0a or later for HDR10 displays). That allows for 4K at 60Hz, with 24-bit color. For GPUs, all the Nvidia 10-series parts and the GTX 950/960 can do this, and AMD’s RX Vega 56/64 and 500-series cards also support HDMI 2.0 or later.
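To see why HDMI 2.0 matters, it is worth doing the arithmetic. Here is a rough back-of-the-envelope sketch in Python (it ignores blanking intervals, which push the real requirement a little higher): 4K at 60Hz with full 24-bit colour needs roughly 12 Gbit/s of pixel data, which is more than HDMI 1.4 can carry after encoding overhead but comfortably within HDMI 2.0’s budget.

```python
# Back-of-the-envelope: why 4K at 60 Hz with 24-bit colour needs HDMI 2.0.
# This ignores blanking intervals, so the true requirement is slightly higher.

width, height = 3840, 2160   # 4K UHD resolution
refresh_hz = 60              # frames per second
bits_per_pixel = 24          # 8 bits per RGB channel at 4:4:4

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Raw pixel data: {raw_gbps:.1f} Gbit/s")   # ~11.9 Gbit/s

# Effective payload after TMDS 8b/10b encoding (80% of the line rate):
hdmi_1_4 = 10.2 * 0.8   # ~8.2 Gbit/s: not enough, hence 4K capped at 30 Hz
hdmi_2_0 = 18.0 * 0.8   # 14.4 Gbit/s: enough headroom for 4K60 at 4:4:4
print(f"HDMI 1.4 payload: {hdmi_1_4:.1f} Gbit/s")
print(f"HDMI 2.0 payload: {hdmi_2_0:.1f} Gbit/s")
```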
After a little research, I upgraded to an NVIDIA GeForce GTX 1060 with 6 GB of RAM. Another PCI Express 3.0 card but this time with an overclocked processor (so that it runs faster), three times as much memory, and support for 8K resolution (7680 × 4320 pixels). It also had the outputs that I needed (DVI and HDMI), giving me plenty of options.
That should do the job.
Top tip: I used PC Part Picker to identify graphics cards that would be compatible with my motherboard.
It took me about 10 minutes to remove the old card and install the new. I then downloaded and installed the latest drivers.
What a difference it made. On my 24″ HD monitor, I was able to run Forza Horizon with everything cranked up to high or ultra!
Time to choose a television.
4K Ultra HD TVs
While researching whether you could feasibly use a 4K TV as a monitor, besides the advice about using HDMI 2.0a, I discovered two further considerations: input lag and 4:4:4 chroma subsampling.
Input lag and game mode
One of the biggest differences between traditional PC monitors and televisions is input lag. This makes sense: most of the time the only input a TV needs to consider is the remote control, and most of us will happily wait a second or two while the channel changes.
The only other time most TVs need to consider input lag is when you plug a games console into them. When you tap left on your gamepad stick, you want your character to move left immediately.
Because a lot of modern TVs process the video signal to make the action in films look smoother, you need to switch this processing off when plugging in a games console or computer. To make that easy, most TV manufacturers now include a ‘game’ mode.
This is important when connecting a PC to a TV because, without game mode enabled, even simple things like dragging your mouse across the screen show a noticeable lag: you move the mouse—there is a slight delay—the pointer eases itself across the screen. It doesn’t take too long for this to get annoying.
4:4:4 chroma subsampling
Because TVs are designed for watching fast-moving images, they are not so good at displaying the sharp text that you might want to manipulate while using a computer.
So if you want to use a 4K TV as a monitor, you need to make sure it is capable of displaying sharp text. How well a TV manages this comes down to something called ‘chroma subsampling’, although most TV documentation and specifications, disappointingly, won’t call it that. You may have to do some digging around in user manuals.
This article “chroma subsampling 4:4:4 vs 4:2:2 vs 4:2:0” explains the importance of 4:4:4 chroma subsampling better than I ever could, but the thing to remember if you want to use a 4K (ultra HD) TV as a monitor is that it must support 4:4:4 chroma subsampling.
For moving images such as TV shows and movies this isn’t critical; 4:2:2 and 4:2:0 do just fine. But for displaying crisp text, which is what most of us use our PCs to manipulate, it is essential that you have a screen that supports 4:4:4.
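If you want to see the effect for yourself, here is a little Python sketch (using the Pillow imaging library) that simulates what 4:2:0 subsampling does to a screenshot of text: it keeps the brightness channel at full resolution but throws away three-quarters of the colour samples, which is exactly the information loss that softens coloured text edges. The filename is a placeholder for any screenshot you have to hand.

```python
# Simulate 4:2:0 chroma subsampling with Pillow to show why fine text
# suffers: luma (Y) stays at full resolution, while the colour channels
# (Cb, Cr) are stored at half resolution in each direction and scaled
# back up. "text_screenshot.png" is a placeholder filename.
from PIL import Image

def simulate_420(img: Image.Image) -> Image.Image:
    y, cb, cr = img.convert("YCbCr").split()
    half = (img.width // 2, img.height // 2)
    # Discard three-quarters of the chroma samples, as 4:2:0 does...
    cb = cb.resize(half).resize(img.size)
    cr = cr.resize(half).resize(img.size)
    # ...then recombine with the untouched full-resolution luma.
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")

original = Image.open("text_screenshot.png").convert("RGB")
simulate_420(original).save("text_screenshot_420.png")
```

Plain black-on-white text survives surprisingly well, because the full-resolution luma channel carries most of it; the damage is most obvious on coloured text and coloured backgrounds.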
I had to do a lot of detective work to figure this out for most of the TV models that I looked at. It is rarely included in the at-a-glance specifications for each TV. I often had to find the model on the manufacturers’ websites, download the manual and search through it.
It turns out that most modern television sets will support 4:4:4. After a lot of reading, I discovered that each manufacturer has a particular way of describing it.
Sony TVs call it “HDMI enhanced format” and require you to set the picture mode to “graphics”.
Samsung TVs call it “HDMI UHD color” and require you to select “PC” mode.
LG TVs call it “HDMI ULTRA HD deep color” and require you to set the picture mode to “game”.
So far, my experience with the TV as a monitor has been great.
Picture mode is set to “Game” and “HDMI ULTRA HD deep color” is set to on for the input to ensure 4:4:4 chroma subsampling.
Within Windows’ display settings, I also have the HDR (high dynamic range) and WCG (wide colour gamut) setting switched on.
Forty-three inches is a good size for my desk. If anything, it is a little large and a curved model may have been better (though I can’t see any smaller than 49″).
Picture quality has been superb. For writing and reading text documents and browsing I have no issues.
There is no discernible input lag on any of the games we’ve played (mostly LEGO, Call of Duty Black Ops, Fortnite, Overwatch and Forza).
As indicated above, with game mode enabled there is little noticeable lag when moving the mouse. Until the recent WebOS update on the TV (v4.10.04), sticking the TV into any other mode (eco, sports, vivid, etc.) made it look like the mouse was slowly gliding through water; the latest update seems to have reduced the latency on other picture modes.
And with the NVIDIA GeForce GTX 1060 pushing the graphics the quality has been wonderful, with every game on high and/or ultra settings.
I would definitely recommend making the switch from HD monitors to a 4K ultra HD TV.
A couple of weeks ago I sent a bunch of video cassettes to Digital Converters to be converted to a digital format that I could view and edit on my PC.
Among the cassettes was one containing this episode of Highway, featuring my mum and dad.
Highway, presented by Sir Harry Secombe, was a British TV series broadcast between 1983 and 1993 and produced by Tyne Tees Television in Newcastle upon Tyne. It was a religious programme that featured songs, readings and interviews with people about their faith, their lifestyle and how they felt God had been at work in their lives.
I can’t remember when this was broadcast—1988 or 1989 maybe? (I’ll have to ask Mum.) After his haemorrhages, Dad had a portion of his skull removed because it had become badly infected; a couple of years later it was replaced with a plastic plate wired in with titanium. The removal left an indentation large enough for Dad to fit his whole fist into. This broadcast was clearly after the restorative surgery, but you can still clearly see the scar down the middle of his forehead.
During the interview with Sir Harry, Dad spoke about how he encountered God after having a triple subarachnoid brain haemorrhage in early 1983. You can hear how his voice still stumbles over some words in the video.
This is one of only three recordings that I have of my dad who died in January 1998.