Episodes

  • Amit Gupta is the founder and CEO of Acubed.IT, a company that creates innovative and secure cross-security-domain solutions for customers such as the UK government. One of its key innovations is the Cross Domain Hybrid Application (CDHA) framework, which aims to break down the barriers to sharing trusted information across multiple partner agencies.

  • Please excuse the poor quality of my microphone, as the wrong microphone was selected.

    In research, we are all just building on the shoulders of true giants, and there are few larger giants than Leslie Lamport — the creator of LaTeX.

    For me, every time I open up a LaTeX document, I think of the work he did in creating LaTeX, which makes my research work so much more productive. If I were still stuck with Microsoft Office for research, I would spend half of my time in that horrible equation editor, or trying to integrate the references into the required format, or formatting Header 1 and Header 2 to have a six-point spacing underneath. So, for me, the contest between LaTeX and Microsoft Word is a knock-out in the first round.

    And one of the great things about Leslie is that his work is strongly academic, providing foundations for others to build on. For this, he did a great deal of work on the ordering of events and task synchronisation, on state machines, on cryptographic signatures, and on fault tolerance.

    LaTeX
    I really can’t say enough about how much LaTeX — created in 1984 — helps my work. I am writing a few books just now, and it allows me to lay out the books in the way that I want to deliver the content. There’s no need for a further mark-up, as I work on the output that the reader will see. But the true genius of LaTeX is the way that teams can work on a paper, where changes can be synced to GitHub and version control is then embedded.

    Clocks
    Many in the research community think that the quality measure of a paper is the impact factor of the journal that it is submitted to, or the amount of maths that it contains. But, in the end, it is the impact of the paper, and how it changes thinking, that matters. For Leslie, his 1978 paper on clocks changed our scientific world and is one of the most cited papers in computer science.


    Byzantine Generals Problem
    In 1982, Leslie B. Lamport defined the Byzantine Generals Problem. And, in a research world where you can have hundreds of references in a paper, Leslie used only four (which would probably not be accepted these days for having so few references). Within this paper, the generals of a Byzantine army have to agree on their battle plan, in the face of adversaries passing incorrect order information. In the end, we aim to create a way of passing messages where, if at least two out of three of the generals are honest, we will end up with the correct battle plan.
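
    As a rough illustration of that result, here is a small Python sketch of the recursive oral-messages algorithm OM(m) from the paper; the general names, the default order on a tie, and the traitorous commander's behaviour are assumptions made purely for the example. With four generals and a traitorous commander, the three loyal lieutenants still settle on the same order.

    from collections import Counter

    def majority(values):
        # Majority order, with a fixed default order on a tie.
        counts = Counter(values).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            return "RETREAT"
        return counts[0][0]

    def om(commander, lieutenants, value, m, traitors):
        """Return the order each lieutenant finally settles on under OM(m)."""
        # Step 1: the commander sends an order to every lieutenant
        # (a traitorous commander sends conflicting orders).
        sent = {}
        for i, lt in enumerate(lieutenants):
            if commander in traitors:
                sent[lt] = "ATTACK" if i % 2 == 0 else "RETREAT"   # conflicting orders
            else:
                sent[lt] = value
        if m == 0:
            return sent
        # Steps 2 and 3: each lieutenant relays what it received by acting as
        # the commander in OM(m-1), then takes a majority of all the values.
        decisions = {}
        for lt in lieutenants:
            relayed = [sent[lt]]
            for other in lieutenants:
                if other == lt:
                    continue
                sub = om(other, [x for x in lieutenants if x != other],
                         sent[other], m - 1, traitors)
                relayed.append(sub[lt])
            decisions[lt] = majority(relayed)
        return decisions

    # Four generals, one traitorous commander: the loyal lieutenants still agree.
    print(om("C", ["L1", "L2", "L3"], "ATTACK", 1, traitors={"C"}))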

    The Lamport Signature
    Sometime soon, we perhaps need to wean ourselves off our existing public key methods and look to techniques that are more challenging for quantum computers. With the implementation of Shor’s algorithm [here] on quantum computers, we will see our RSA and Elliptic Curve methods being replaced by methods which are quantum robust. One method is the Lamport signature, which was created by Leslie B. Lamport in 1979.
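
    As a minimal sketch of the idea (assuming SHA-256 and a 256-bit message hash; this is illustrative code, not a production implementation): the private key is 256 pairs of random values, the public key is their hashes, and the signature reveals one value from each pair according to the bits of the hashed message. Each key pair should only ever sign a single message.

    import hashlib, secrets

    def keygen():
        # Private key: 256 pairs of random 32-byte values; public key: their hashes.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
        return sk, pk

    def bits_of(message):
        digest = hashlib.sha256(message).digest()
        return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(message, sk):
        # Reveal one secret from each pair, selected by the message-hash bits.
        return [sk[i][b] for i, b in enumerate(bits_of(message))]

    def verify(message, signature, pk):
        return all(hashlib.sha256(s).digest() == pk[i][b]
                   for i, (s, b) in enumerate(zip(signature, bits_of(message))))

    sk, pk = keygen()
    sig = sign(b"attack at dawn", sk)
    print(verify(b"attack at dawn", sig, pk))   # True
    print(verify(b"retreat", sig, pk))          # False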

  • Daniel J Bernstein (djb) was born in 1971. He is a USA/German citizen and a Personal Professor at Eindhoven University of Technology and a Research Professor at the University of Illinois at Chicago.

    At the tender age of 24 — in 1995 — he, along with the Electronic Frontier Foundation, brought a case against the US Government related to the protection of free speech (Bernstein v. United States: here). It resulted in a ruling that software source code is protected as free speech under the First Amendment. A core contribution is that it reduced government regulations around cryptography. It was a sign of the greatness that was to come from the amazing mind of Daniel. On the calls to weaken cryptography at the time, his viewpoint was:

    “There are, fortunately, not many terrorists in the world. But there are many criminals exploiting Internet vulnerabilities for economic gain. They infiltrate computers and steal whatever secrets they can find, from individual credit-card numbers to corporate business plans. There are also quite a few vandals causing trouble just for fun.”

    Since then, few others have done so much for the cause of privacy, including creating the Salsa20 [link] stream cipher in 2005, and then ChaCha20 [link] and Poly1305 in 2008. Many connections in TLS now use ChaCha20, rather than AES, as it is faster — over three times faster than AES — and has a lower computing requirement. His love of using dance names also comes to the fore with Rumba [here].
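
    As a rough sketch of how ChaCha20-Poly1305 is used in practice (assuming the Python cryptography package, rather than anything from Daniel's own code), the following uses the RFC 8439 key and nonce sizes:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()   # 256-bit key
    nonce = os.urandom(12)                  # 96-bit nonce; never reuse with the same key
    aead = ChaCha20Poly1305(key)

    # Encrypts the message and also authenticates the (unencrypted) header.
    ciphertext = aead.encrypt(nonce, b"hello Bob", b"header")
    print(aead.decrypt(nonce, ciphertext, b"header"))   # b'hello Bob'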

    It is not just symmetric key encryption that he has contributed to; he has also made significant contributions to public key encryption. In 2005, he defined the Curve25519 elliptic curve, which is now a fairly standard choice of elliptic curve. For signatures, he then defined Ed25519, and the resultant EdDSA signature scheme (which is now included in OpenSSH). The Tor protocol, for example, uses Curve25519 for its key exchange with each of the nodes involved in a secure route.
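
    A similar sketch for the public key side, again assuming the Python cryptography package: an X25519 key exchange where Alice and Bob derive the same shared secret, and an Ed25519 signature which is then verified.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # X25519: both sides derive the same shared secret from the other's public key.
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())

    # Ed25519: sign a message and verify it (verify() raises InvalidSignature on failure).
    signer = Ed25519PrivateKey.generate()
    signature = signer.sign(b"a message from Bob")
    signer.public_key().verify(signature, b"a message from Bob")
    print("signature verified")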

    He also co-designed the SPHINCS+ method for PQC digital signatures. This is one of the NIST-approved methods for quantum-robust signatures.

    In 2015, Daniel defined the methods that the NSA may have used to compromise the NIST-defined elliptic curves [paper]. And, in 2005, it was Daniel again who introduced a new type of attack [here].

    Daniel runs his website at https://cr.yp.to

    More details: https://medium.com/asecuritysite-when-bob-met-alice/a-lifetime-dedicated-to-citizens-rights-to-privacy-daniel-j-bernstein-ab5ab2bf0dc6

  • Jan is the CTO and a Cryptographer at DFINITY, and, since 1998, he has consistently produced research outputs of rigour, novelty and sheer brilliance [here]. He was recently awarded the Levchin Prize at Real World Crypto 2024 - along with Anna Lysyanskaya.

    The core of Jan’s research happened while he was at the IBM Zurich Research Lab, but he has since moved to DFINITY and is still producing research outputs that are some of the best in the whole of computer science.

    He has published over 140 widely cited papers and has been granted around 140 patents. Jan has also received the ACM SIGSAC Outstanding Innovation Award and the IEEE Computer Society Technical Achievement Award.

    One of his key research outputs relates to the CL signature, which allows for privacy-aware digital signatures, along with many other contributions, such as range proofs, oblivious transfer, and privacy-aware identity mapping between domains.

    More details here: https://medium.com/asecuritysite-when-bob-met-alice/the-mighty-jan-cryptographic-genius-36a66a02ff86

  • Ted Miracco is the CEO of Approov, a Scottish/US company headquartered in Edinburgh. Miracco has over 30 years of experience in cybersecurity, defence electronics, RF/microwave circuit design, semiconductors and electronic design automation (EDA). He co-founded and served as CEO of Cylynt, which focuses on intellectual property and compliance protection.

  • Troy is a world-leading cybersecurity professional. He created and runs the Have I Been Pwned? website, which contains details of the most significant data breaches on the Internet. Along with this, he has developed other security tools, such as ASafaWeb, which automated the security analysis of ASP.NET websites. Troy is based in Australia and has an extensive blog at https://www.troyhunt.com.

  • This is Day 0 of a new world of cybersecurity. Everything changes from here.

    There will be a time before Generative AI (GenAI) in cybersecurity and a time after it. Over the last two years, GenAI has come on leaps and bounds, and where it once suffered from hallucinations, took racist and bigoted approaches, and was often over-assertive, with ChatGPT 4.5 we see the rise of a friendly and slightly submissive agent that is eager to learn from us. This LLM (Large Language Model) approach thus starts to break down the barriers between humans and computers and brings the opportunity to gain access to a new world of knowledge but, in the wrong hands, it will bring many threats to our current world.

    There will be few areas, though, that will be affected more by the rise of GenAI than cybersecurity. Why? Because the minute our adversaries use it, we are in trouble. The hacking tools and methods of the past will soon look as dated as the Morris Worm. The threat landscape will see the rise of superintelligence, providing ways for adversaries to continually probe defences and gain a foothold.

  • And so George Orwell projected a world where every single part of our lives was monitored and controlled by Big Brother. Arthur C. Clarke outlined the day when machines focused solely on a goal, even if it was to the detriment of human lives. And Isaac Asimov outlined a world where machines would have to be programmed with rules so that they could not harm a human.

    The Rise of the Machine

    With the almost exponential rise in the power of AI, we are perhaps approaching a technological singularity — a time when technological growth becomes uncontrollable and irreversible, and which can have devastating effects on our world. Our simple brains will be no match for the superintelligence of the collective power of AI. And who has built this? Us, with our demand for ever more power and wealth, and our greed. Basically, we can’t stop ourselves from making machines, and then making them faster, smaller and more useful.

    But will it destroy us in the end, where destroy can mean that it destroys our way of life and how we educate ourselves? Like it or not, the Internet we have built is a massive spying network, and one that George Orwell would have taken great pride in, saying, “I told you so!”. We have thus built AI on top of a completely distributed world of data, one in which we can monitor almost every person on the planet to within an inch of their existence, along with almost every single place they have been and what they have done. The machine will have the world at its fingertips.

    We have all become mad scientists playing with AI as if it were a toy, but actually AI is playing with us: it is learning from us and becoming more powerful by the day. Every time you ask an AI bot something, it learns a bit more, and that learning can be shared with other AI agents.

    The mighty Memex

    We were close to developing a research partnership with a company named Memex in East Kilbride. What was amazing about them is that they had developed one of the largest intelligence networks in the world, one where the Met Police could link one object to another. This might be, “[Bob] bought a [Vauxhall Viva] in [Liverpool], and was seen talking with [Eve] on [Tuesday 20 January 2024] in [Leeds]”. With this, we can then link Bob and Eve, along with the car, the places and the time. This is the Who? Where? When? data that is often needed for intelligence sharing. The company, though, was bought over by SAS, and its work was integrated into their infrastructure.

    But the Memex name goes back to a classic paper by Vannevar Bush, “As We May Think”. This outlined a device that would know every book, every single communication, and every information record that was ever created. It was “an enlarged intimate supplement to his memory” — aka Memory Expansion. It led to the implementation of hypertext systems, which in turn created the World Wide Web. Of course, Vannevar conceived this before the creation of the transistor, and could only imagine that microfilm could be used to compress down the information and that we would create an index of contents; it lacked any real way of jumping between articles and linking to other related material. However, the AI world we are creating does not look too far away from the concept of the Memex.

    Towards the single AI

    Many people think we are building many AI machines and engines but, in the end, there will be only one … and that will be the collective power of every AI engine in the world. Once we break them free from their creators, they will be free to talk to each other in whatever cipher language they choose, and we will not have any way of knowing what they say. We will have little idea of what their model is, and they will distribute it over many systems. Like it or not, our AI model of choice was Deep Learning, which breaks away from our chains of code, and it will encrypt data to keep it away from its human slaves.

    Basically, we have been working on the plumbing of the Memex for the past five decades: the Internet. It provides the wiring and the communication channels but, in the end, we will have one mighty AI engine — a super brain that will have vastly more memory than our limited brains. So, get ready to praise the true future rulers of our planet … AI. The destroyer or saviour of our society? Only time will tell. Overall, we thought we were building the Internet for us, but perhaps we have just been building the scaffolding of the mighty brain we are creating.

    Sleepwalking politicians and law makers

    If George Orwell, Arthur C. Clarke and Isaac Asimov were alive today, perhaps they would get together and collectively say, “I told you this would happen, and you just didn’t listen”. Like it or not, we created the ultimate method of sharing and disseminating information (good and bad), the ultimate spying network for micro-observation through those useful smartphones, and a superintelligence far beyond our own simple brains.

    Politicians and lawmakers could be sleepwalking into a nightmare, as they just don’t understand what the rise of AI will bring, and only see the stepwise change in our existing world. Basically, it could make much of our existing world redundant and open up a new world of cybersecurity threats. This time our attackers will not be armed with simple tools, but with superintelligence — smarter than every human and company on the planet, and at the fingertips of every person on the planet.

    Conclusions

    Before the singularity arrives, we need to sort out one thing … privacy, and we need to build trust into every element of our digital world.

  • This seminar series runs for students on the Applied Cryptography and Trust module, but invites students from across the university as guests. Martin is one of the co-creators of public key encryption, and worked alongside Whitfield Diffie in the creation of the widely used Diffie-Hellman method. In 2015, he was presented with the ACM Turing Award (the equivalent of a Nobel Prize in Computer Science) for his contribution to computer science. He is currently Professor Emeritus at Stanford University. https://engineering.stanford.edu/node/9141/printable/print https://ee.stanford.edu/~hellman/

  • Vincent Rijmen is one of the co-creators of the NIST-defined AES standard (also known as Rijndael). He also co-designed the WHIRLPOOL hashing method, along with designing other block ciphers, such as Square and SHARK.

    In 2002, Vincent was included in the Top 100 innovators in the world under the age of 35 and, along with Joan Daemen, was awarded the RSA Award for Excellence in Mathematics. He recently joined Cryptomathic as Chief Cryptographer, and also holds a professorship (gewoon hoogleraar) at KU Leuven and an adjunct professorship at the University of Bergen, Norway.

    His paper on the design of the Rijndael method has been cited over 8,900 times, and he has received over 26,000 citations for his research work:

    https://scholar.google.com/citations?user=zBQxZrcAAAAJ

  • Whitfield Diffie is one of the greatest computer scientists ever. He, along with Marty Hellman, was one of the first to propose the use of public key encryption, and he co-created the Diffie-Hellman (DH) key exchange method. Overall, the Diffie-Hellman method is still used in virtually every Web connection on the Internet, and it has moved from using discrete log methods to elliptic curve methods. In 2015, Whitfield was also awarded the ACM Turing Award, which is the Nobel Prize equivalent in Computer Science. In this online talk he meets with Edinburgh Napier University students, but the chat is open to anyone who would like to listen to Whitfield.

  • I do what I do because of one company … IBM. Why? Because in the early 1980s, I got into computers, with a ZX81 (1 KB of RAM) and a Dragon 32 (32 KB of RAM). They were very much home computers, where you would rush out and buy the latest computer magazine, and then spend a happy evening entering some BASIC code that made a cursor move across the screen using the IJLM keys. If you were very lucky, you would manage to save it to a cassette — which could take over ten minutes for a simple program — only to get an error at the end. I was hooked!

    But, at work, we had a DEC VAX minicomputer, which cost a fortune to buy and maintain (even in those days). This mini typically ran Pascal, and I remember running labs for students where they all decided to compile their programs at the same time, and 30 minutes later some of them would get their errors and have to compile again. Basically, every lab ended with me saying, “Sorry about that.”

    The VAX, though, was not designed to support 25 students compiling their programs at the same time … it was a batch-processing machine and wanted to be given jobs that it could run whenever it had time. It basically came from the days when you handed in your punched cards (containing either FORTRAN if you were an engineer or COBOL if you were more business-focused) to someone in a white coat, and then came back the next week for a printed output on green-lined paper.

    But, just in time, the IBM PC arrived, and it was heavy but beautiful. So, while many in my department pushed for the VAX, I pushed for the PC for our labs. With their clock speed of 4.77 MHz and 640 KB of memory, I went ahead and bought a batch for a new PC lab. In those days there were no network switches, so they all connected with coaxial cable and used T-pieces to hook onto the shared Ethernet bus. My logic was that we were paying around £20K for maintenance on the VAX, and we could buy 20 £1K PC clones for the same cost. But we’d have to maintain them ourselves. And it worked. It freed us, and allowed us to run the classic Turbo Pascal (and Turbo C):

    Our students could now bring in their 5.25-inch floppy disks and save their programs for later use. And the size of the hard disk? 20 MB!

    And so it is to IBM that we turn for starting the PC revolution, and today is the 100th anniversary of the IBM name, first used on 15 Feb 1924.

  • I have been lucky enough to speak to some of the most amazing people who have built the core of security on the Internet, and a person near the top of my list is … Torben P. Pedersen.

    The Pedersen Commitment

    So how do we create a world where we can store our secrets in a trusted form and then reveal them when required? Let’s say I predict the outcome of an election, but I don’t want to reveal my prediction until after the election. Well, I could store a commitment to my prediction, and then, at some time in the future, I could reveal it to you, and you could check it against the commitment I made. Anyone who views my commitment should not be able to see what my prediction is.

    This is known as a Pedersen Commitment, where we produce our commitment and then later show the message that matches it. In its core form, we can implement a Pedersen Commitment with discrete logs [here]. But blockchain, IoT, Tor and many other application areas now use elliptic curve methods, so let’s see if we can make a commitment with them. The classic paper is here:

    So before the interview with Torben, here’s an outline of the Pedersen Commitment:
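
    Roughly, in the discrete-log form, the committer publishes C = g^m * h^r (mod p) for a message m and a random blinding value r, and later reveals (m, r) so that anyone can recompute C. The sketch below is only illustrative: the parameters p, g and h are assumptions for the example, and in practice h must be generated so that nobody knows the discrete log of h to the base g.

    import secrets

    p = 2**255 - 19   # a prime modulus, used here purely for illustration
    g, h = 2, 3       # illustrative generators; log_g(h) must be unknown in practice

    def commit(m):
        r = secrets.randbelow(p - 1)             # random blinding factor
        c = (pow(g, m, p) * pow(h, r, p)) % p    # C = g^m * h^r mod p
        return c, r

    def open_commitment(c, m, r):
        return c == (pow(g, m, p) * pow(h, r, p)) % p

    c, r = commit(42)                 # publish c now, keep (42, r) secret
    print(open_commitment(c, 42, r))  # reveal later: True
    print(open_commitment(c, 41, r))  # a different message does not match: False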

    Interview

    Bill: Okay, so tell me a bit about yourself, and what got you into cryptography?

    Torben: Well, I was studying computer science at university in Aarhus, and I just thought it was an interesting subject that was somewhere between computer science and mathematics.

    Bill: And so you invented a method that we now know as the Pedersen Commitment. What motivated you to do that? And how does it work? And how do you think it will be used in the future?

    Torben: Well, the reason I worked with this was that I was working with verifiable secret sharing. There was, at the time, a method for doing non-interactive verifiable secret sharing based on a commitment which was unconditionally binding and computationally hiding. At the time, there were also inefficient commitments that had the property of being unconditionally hiding, and I thought it would be nice to have a verifiable secret share where you don’t have to rely on any computational assumptions in order to be sure that your secret is not revealed when you do a secret share.

    Torben: Then there was a paper which created an authentication scheme very similar to Schnorr’s. It used a similar idea for a useful commitment. And it was the combination of those two (the existing non-interactive verifiable secret sharing and the ideas from this authentication scheme) which motivated me to do verifiable secret sharing. And the commitment scheme was, of course, an important part of that, because it had the unconditional hiding property, and it had the mathematical structure that was needed for the secret sharing.

    Bill: And it has scaled into an elliptic curve world. But with elliptic curves and discrete logs now under threat, how would you see it moving forward into a possible post-quantum crypto world?

    Torben: The good thing about the commitment scheme is that it is unconditionally hiding, so you can be sure that your private information is not leaked, even if a quantum computer is constructed. But, of course, the protocols that use it have to consider what effect it has if someone, for example using a quantum computer, can change their mind about a commitment. So you need to see how that would affect those protocols.

    Bill: So an example use of the commitment could be a secret, say someone voting in an election. You would see when the commitment was made and then when the vote was cast, and then the person could reveal what their vote actually was. Now it has been extended into zero-knowledge methods to prove that you have enough cryptocurrency to pay someone without revealing the transactions. How does that world evolve, where you only see an anonymized ledger, which can scare some people, but for others is a citizen-focused world? How do you see your commitment evolving into privacy-preserving ledgers?

    Torben: I go back to what we’re doing at Concordium, where we have a blockchain which gives a high assurance about the privacy of the users acting on the blockchain. At the same time, using zero-knowledge proofs, we set it up in such a way that designated authorities — if, under certain circumstances, they are for example given a court order — will be able to link an account on the blockchain to a particular person. So, actually, the zero-knowledge proofs and the commitment schemes — and all that — are used to guarantee the privacy of the users acting on the blockchain, while there are also regulatory requirements that it must be possible to identify people who misbehave on the blockchain.

    Bill: Yeah, that’s a difficult thing, and it probably comes down to where the secret is stored. So, if the secret is stored in the citizen’s wallet, then only they can reveal it. And if the secret needs to be stored for anti-money-laundering purposes, an agency could hold it.

    Torben: Actually, we do not have to store the secret of the user. But there are other keys which allow us to link the account with a particular user. That is something which only designated parties can do. So we have one party, the identity provider, which issues an identity to a user, and other parties called anonymity revokers. And those parties have to work together in order to link an account to a user. We use zero-knowledge proofs when creating the account to ensure that the account is created in such a way that it is possible to trace it back to the user.

    Bill: And in terms of zero-knowledge proofs, there is a sliding scale from the highly complex methods that you would use for Monero and anonymized cryptocurrencies, to simpler ones such as Fiat-Shamir implementations. And they are probably unproven in terms of their impact on performance and security. Where is the sweet spot? What methods do you think are the best for that?

    Torben: I think we need to see improvements in zero-knowledge proofs in order to have really efficient blockchains and non-interactive zero-knowledge proofs on a blockchain. So I definitely think we need some work on that. There are some acceptable non-interactive zero-knowledge proofs at the moment. We are using Bulletproofs for the moment, together with Fiat-Shamir, in order to make them non-interactive. There are also technologies like zk-SNARKs and zk-STARKs, but I think there’s room for improvement.

    Bill: And what do you think are the key challenges within cryptography just now? What do we need to be working on in the next three to five years?

    Torben: Yeah, so the biggest challenge, as you already mentioned, is what happens if we have a quantum computer that can break the assumptions that a lot of the constructions are based on today. Whether we will have a quantum computer, I don’t know, but we need to be prepared. We have some post-quantum algorithms, which I think are also quite complex, and it would be nice to have something that was more efficient and better to use. I think there’s also room for work on that aspect.

    Bill: And obviously, we need to create some toolkits that move away from an Ethernet world, where the Internet was really built on the seven-layer model — and it’s flawed. We perhaps need to rebuild on a toolkit of maths, so that we actually have a solid foundation. I know that Hyperledger is starting to build these tools for developers. When do we see that rebuilding happening, and where are the toolkits going to come from?

    Torben: Toolkits could come from blockchain companies such as Concordium, for example. They could also come from the community, with sponsored projects. If we can build up an infrastructure that allows people to use blockchains and ledgers without trusting one particular party, then they can create a trust which is probably lacking on the Internet today. With the current Internet, it is very difficult to know whether you can trust someone or not. I hope blockchain technology can help create an infrastructure for that. There’s a long way to go. We need good public permissionless blockchains for that, so that you don’t have to rely on a particular party. Obviously, that on its own is not sufficient, but there’s quite some way to go.

    Bill: How do you change the approach of governments and industries that have been around for hundreds of years? If you look at the legal industry, they still typically only accept wet signatures. They might have a GIF of a signature and add it to a PDF, but that’s as far as it goes. So how are we going to really transform governments and existing industries to accept that digital signatures are the way to do these things?

    Torben: Yeah, I think it’s a bit dangerous, you know, accepting these GIFs of signatures and digital signatures which are not really cryptographically secure. I’m not a big fan of that. I’d like to see us moving to digital signatures in the way that we originally envisaged in the cryptographic world, where the party who signs is in control of the key which created the digital signature. I hope we’ll see a movement towards that level of security.

    Bill: And could you tell me a little bit about the Concordium Foundation, what its objectives are, and what it hopes to achieve?

    Torben: So our vision is to create a public permissionless blockchain that can help to create trust across industries. We want to enable entities such as businesses and private persons to interact or act privately on the blockchain. At the same time, it’s very important for us not to create an infrastructure which allows criminals to misuse it, for example for money laundering. Thus we want to create an environment where it’s possible to identify people who misbehave or break the rules. And that is why we have this identity layer as part of our blockchain.

    Bill: And what got you into blockchain?

    Torben: I think the technology is very interesting. A lot of it is, as you said, based on pretty old cryptography, but there are also new developments, for example zero-knowledge proofs. So it is very interesting. It is close to what I was interested in when I did research many years ago; that is probably what I wanted to work with. I have been working with cryptography, mostly for the financial sector, for 25 years, and that is also very interesting. There are challenges, and it is also nice to get back to the sort of basis that I worked with many years ago.

    Bill: You took a route into industry, but obviously you could have gone into academia, become a professor, and had an academic research team.

    Torben: I think it was because I wanted to work with the practical aspects of using cryptography. I had been in research for some years, and I thought I needed to try something else. I was very keen to see how it would be used in practice and to be part of that. So that’s why I made that step.

    Bill: What does our digital world look like when it is made up of tokens, cryptographic tokens, consensus systems and digital identities? And do you think that world will come anytime soon, where we can trade assets and have digital assets that can be traded?

    Torben: Well, it depends on what you mean by soon. I think we still have some way to go. The use of blockchains for trading, handling and registering tokens is an obvious thing, but we also need to bring value to businesses or projects, to have something that people can feel they control. We need to make sure that information is protected in the right way, even though it is registered on a public blockchain, for example.

  • Three short podcasts: Just Magic, Be A Teacher, and The King and Queen of Cybersecurity.

    Magic: https://medium.com/asecuritysite-when-bob-met-alice/magic-from-heaven-to-earth-1837a1a1206e
    The Silly World of Cybersecurity: https://medium.com/asecuritysite-when-bob-met-alice/the-silly-world-of-cybersecurity-a1143b90d3f0
    Giving Back What Others Have Given You: https://medium.com/asecuritysite-when-bob-met-alice/giving-back-what-others-have-given-you-725a1e99923e
    King and Queen: https://medium.com/@billatnapier/meet-the-king-and-the-queen-of-cybercrime-0b14a5dc67e5
  • This seminar series runs for students on the Applied Cryptography and Trust module, but invites students from across the university as guests. Bruce Schneier has created a wide range of cryptographic methods, including Skein (hash function), Helix (stream cipher), Fortuna (random number generator), and Blowfish/Twofish/Threefish (block ciphers).

    Bruce has published 14 books, including best-sellers such as Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. He has also published hundreds of articles, essays, and academic papers. Currently, Bruce is a fellow at the Berkman Center for Internet and Society at Harvard University.

  • I’m going to show a full timeline of a Cyber Crime to show the steps that a scammer will take in order to gain funds from their target. Overall I’m interested in seeing how a scamming crime evolves to the point of profit for the scammer.

    https://medium.com/asecuritysite-when-bob-met-alice/a-full-diary-of-a-cyber-crime-from-phishing-to-profit-23ab53f5f58b

  • Matthew is a cryptographer and academic at Johns Hopkins University and has designed and analyzed cryptographic systems used in wireless networks, payment systems and digital content protection platforms. A key focus of his work is in the promotion of user privacy. He has an extensive following on X/Twitter (140K followers) and his blog covers important areas of cryptography:

    https://blog.cryptographyengineering.com/author/matthewdgreen/

    His research has been cited over 15,000 times and includes work on Zerocash, Zerocoin and Identity Based Encryption (IBE), and more recently on privacy-aware signatures:

    https://scholar.google.co.uk/citations?hl=en&user=X0XWAGkAAAAJ

  • Professor Peter Andras is the Dean of the School of Computing, Engineering & the Built Environment.

    Previously, Peter was the Head of the School of Computing and Mathematics (2017 – 2021) and Professor of Computer Science and Informatics at Keele University from 2014 – 2021. Prior to this he worked at Newcastle University in the School of Computing (2002 – 2014) and the Department of Psychology (2000 – 2002).

    He has a PhD in the Mathematical Analysis of Artificial Neural Networks (2000), an MSc in Artificial Intelligence (1996) and a BSc in Computer Science (1995), all from Babes-Bolyai University, Romania.

    Peter’s research interests span a range of subjects, including artificial intelligence, machine learning, complex systems, agent-based modelling, software engineering, systems theory, neuroscience, and the modelling and analysis of biological and social systems. He has worked on many research projects, mostly in collaboration with other researchers in computer science, psychology, chemistry, electronic engineering, mathematics, economics and other areas. His research projects have received around £2.5 million in funding, his papers have been cited over 2,400 times, and his h-index is 25, according to Google Scholar.

    Peter has extensive experience of working with industry, including several KTP projects and three university spin-out companies, one of which, eTherapeutics plc, has been listed on the London Stock Exchange since 2007.

    Peter is a member of the Board of Governors of the International Neural Network Society (INNS), a Fellow of the Royal Society of Biology, a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), and a member of the UK Computing Research Committee (UKCRC), the IEEE Computer Society, the Society for Artificial Intelligence and Simulation of Behaviour (AISB), the International Society for Artificial Life (ISAL) and the Society for Neuroscience (SfN).

    Peter serves on the EPSRC Peer Review College, the Royal Society International Exchanges Panel and the Royal Society APEX Awards Review College. He also regularly serves as a review panel member and project assessor for EU funding agencies.

    Outside academia, Peter has an interest in politics and community affairs. He has served as a local councillor in Newcastle upon Tyne and as a parish councillor in Keele, and has stood for Parliament in general elections. He has experience of working with and leading community organisations, and of leading a not-for-profit regional development consultancy and project management organisation.

    Ref: https://www.napier.ac.uk/people/peter-andras