A tiny snippet for reading files chunk by chunk in plain ...

Over-Optimizing for Performance

Recently on the csharp subreddit, the post "C# 9.0 records: immutable classes" linked to a surprisingly controversial article discussing how C# 9.0's records are, underneath it all, immutable classes. The comments are full of back-&-forth over whether one should use records for ease or structs for performance. The pro-struct argument revolved around the belief that performance should always be a developer's #1 priority, and anything less was the realm of the laggard.
Here is a real-world example that shows with stark clarity why that kind of thinking is wrong.
Consider the following scenario:

1

You're working on a game with dozens, maybe hundreds of people on the team; you don't know because when you were cross with facilities about them removing all the fluorescents, you got accused of being against the new energy saving initiative. Now you swim in a malevolent ocean of darkness that on some very late nights alone in the office, you swear is actively trying to consume you.
 

2

The team that preceded you inherited an engine that is older than OOP, when source repositories were stacks of 8-inch floppies, and it looked as if Jefferson Starship was going to take over the world. One year ago they bequeathed upon the company this nightmare of broken, undocumented GOTO spaghetti & anti-patterns. You're convinced this was their sadistic revenge for all getting fired post-acquisition.
 

3

Management denied your request to get headcount for an additional technical artist, but helpfully supplied you with an overly nervous intern. After several weeks working alongside them, you're beginning to suspect they're pursuing something other than a liberal arts degree.
 

4

Despite the many getting started guides you spent countless evenings writing, the endless brownbags nobody attended, and the daily dozen emails you forward to oppressively inquisitive artists comprised of a single passive-aggressive sentence suggesting they scroll down to the part that begins FW: FW: FW: FW: FW: FW: RE: WE BROKE TOOL NEED WORKAROUND ASAP ...
 
...yes, despite all of that, the engineering team still spent days tracking down why the game kept crashing with Error 107221: У вас ошибка ("You have an error") after re-re-re-re-re-throwing an ex_exception when it couldn't (and should never even try to) load a 16K-textured floor mat.
 

5

Despite your many attempts to politely excuse yourself, one blissfully unaware artist exhausts 48 minutes of your lunch break explaining how the Pitchfork review for the latest "dope slab" of this TikTok-Instagram-naphouse artist you never heard of was just sooooo unfair.
 
And then in their hurry to finish up & catch the 2:30 PM bus home, they forget to toggle Compress To CXIFF (Custom Extended Interchange File Format), set the Compression slider 5/6ths of the way between -3 & -2, look to their left, look to their right, click Export As .MA 0.9.3alpha7, and make absolutely, positively, 100% SURE not to be working in prod. And THAT is how the game explodicated.
 

6

You know better than anyone the intermediate file format the main game loop passes to Game.dll, memory mapping it as a reverse top-middle Endian binary structure.
 
You know for 381 of the parameter fields what their 2-7 character names probably mean.
 
YOU know which 147 fields always have to be included, but with a null value, and that the field ah_xlut must ALWAYS be set to 0 unless it's Thursday, in which case that blackbox from hell requires its internal string equivalent: TRUE.
 
YOU know that the two tech artists & one rapidly aging intern that report to you would totally overhaul tooling so artists would never "happen" again, but there just aren't enough winters, springs, summers, falls, July 4ths, Christmas breaks, Presidents Days, and wedding anniversaries in a year to properly do so.
 

7

If you could just find the time between morning standups, after lunch standups, watersprint post-mortems, Milbert's daily wasting of an hour at your desk trying to convince you engineering should just rebuild the engine from the ground up in JavaScript & React, & HR's mandatory EKG Monitor job satisfaction surveys, you might be able to get at least some desperately-needed tooling done.
 
And so somehow you do. A blurry evening or two here. A 3:00 AM there. Sometimes just a solitary lunch hour.
 
Your dog no longer recognizes you.
 
You miss your wife calling to say she's finally cleaning out the hall closet and if you want to keep this box of old cards & something in plastic that says Underground Sea Beta 9.8 Grade, you better call her back immediately.
 
And your Aunt Midge, who doesn't understand how SMS works, bombards you one evening:
your father is...
no longer with us...
they found him...
1 week ago...
in an abandoned Piggly Wiggly...
by an old culvert...
split up...
he was then...
laid down to rest...
sent to St. Peter's...
and your father...
he's in a better place now...
don't worry...
it's totally okay...
we decided we will all go...
up to the mountain
 
You call your sister in a panic and, after a tidal wave of confusion & soul-rending anxiety, learn it was just Hoboken Wireless sending the messages out of order. This causes you to rapidly cycle.
 

8

On your bipolar's upswing, you find yourself more productive than you've ever been. Your mind is aglow with whirling, transient nodes of thought careening through a cosmic vapor of invention. It's like your brain is on 200mg of pure grade Adderall.
 
Your fingers ablaze with records, clean inheritance, beautiful pattern matching, bountiful expression syntax, aircraft carriers of green text that generate the most outstanding CHM for an internal tool the world has ever seen. Readable. PERFECTLY SOLID.
 
After much effort, you gaze upon the completed GUI of your magnum opus with the kind of pride you imagine one would feel if they hadn't missed the birth of their son. Clean, customer-grade WPF; tooltips for every control; sanity checks left & right; support for plugins & light scripting. It's even integrated with source control!
 
THOSE GODDAMNED ARTISTS CAN'T FAIL. YOUR PIPELINE TOOL WON'T LET THEM.
 
All they have to do is drag content into the application window, select an options template or use the one your tool suggests after content analysis, change a few options, click Export, and wait for 3-5 minutes to generate Game.dll-compatible binary.
 
Your optimism shines through the commit summary, your test plan giddy & carefree. With great anticipation, you await code review.
 

9

A week goes by. Then two. Then three. Nothing. The repeated pinging of engineers, unanswered.
 
Two months in you've begun to lose hope. Three months, the pangs of defeat. Four months, you write a blog post about how fatalism isn't an emotion or outlook, but the TRANSCENDENCE of their sum. Two years pass by. You are become apathy, destroyer of wills.
 

10

December 23rd, 2022: the annual Winter Holidays 2-hour work event. The bar is open, the Kokanee & Schmidt's flowing (max: 2 drink tickets). The mood a year-high ambivalent; the social distancing: acceptable. They even have Pabst Blue Ribbon, a beer so good it won an award once.
 
Standing beside you are your direct reports, Dave "Macroman" Thorgletop and wide-eyed The Intern, the 3 of you forming a triumvirate of who gives a shit. Dave is droning on & on about a recent family trip to Myrtle Beach. You pick up something something "can you believe that's when my daughter Beth scooped up a dead jellyfish? Ain't that something? A dead jellyfish," and "they even had a Ron Jons!"
 
You barely hear him, lost as you are in thought: "I wish I had 2 days of vacation." You stare down ruefully at your tallboy.
 
From the corner of your eye you spot Milbert, index finger pointed upward, face a look of pure excitement.
 
"Did I tell you about my OpenWinamp project? It's up on SourceForge", he says as he strides over. It's unsettling how fast this man is.
 
"JAVASCRIPT IS JUST A SUBSET OF JAVA!" you yell behind you, tossing the words at him like a German potato masher as you power walk away. It does its job, stopping Milbert dead in his tracks.
 
Dave snickers. The Intern keeps staring wide-eyed. You position yourself somewhat close to the studio's 3 young receptionists, hoping they serve as a kind of ritual circle of protection.
 
It works... kind of. Milbert is now standing uncomfortably close to The Intern, Dave nowhere to be seen.
 
From across the room you distinctly hear "Think about it, the 1st-person UI could be Lua-driven Electron."
 
The Intern clearly understands that words are being spoken to them, but does not comprehend their meaning.
 
You briefly feel sorry for the sacrificial lamb.
 

11

You slide across the wall, putting even more distance between you & boredom made man. That's when you spot him, arrogantly aloof in the corner: Glen Glengerry. Core engineering's most senior developer.
 
Working his way up from a 16-year-old game tester making $4.35 an hour plus free Dr. Shasta, to pulling in a cool $120K just 27 years later, plus benefits & Topo Chicos. His coding style guides are catechism, his Slack pronouncements ex cathedra; he might as well be CTO.
 
You feel lucky your team is embedded with the artists. You may have sat through their meetings wondering why the hell you should care about color theory, artistic consistency, & debates about whether HSL or CMYK was the superior color space (spoiler: it's HSL), but you were independent and, to them, a fucking code wizard, man.
 
And there he stands, this pseudo-legend, so close you could throw a stapler at him. Thinning grey-blonde tendrils hanging down from his CodeWarrior hat, white tee with This Guy VIMs on the back, tucked into light blue jeans. He's staring out into the lobby at everything and yet... nothing at all.
 

12

Maybe it's the 4.8% ABV. Maybe it's the years of crushing down anger into a singularity, waiting for it to undergo rapid fiery expansion, a Big Bang of righteous fury. Maybe it's those sandals with white socks. Maybe it's all three. But whatever it is, it's as if God himself compels you to march over & give him a piece of your mind, seniority be damned.
 
"Listen, you big dumb bastard..."
 
That... is maybe a little too aggressive. But Glen Glengerry barely reacts. Pulling a flask out of his back pocket, he doesn't look over as he passes it to you.
 
Ugh. Apple Pucker.
 

13

"I thought bringing in your own alcohol was against company policy", wiping sticky green sludge from your lips. He turns with a look of pure disdain & snorts.
 
"You think they're going to tell ME what I can & can't bring in?" He grabs the flask back, taking a big swig.
 
For what feels like an eternity, you both stand in silence. You swallow, speaking softly. "None of you even looked at my code. I worked very, very hard on that. My performance review for that year simply read 'recommend performance improvement plan.'" The words need no further context.
 
"I know", Glen² replies. "That was me."
 

14

Now you're not a weak man, and maybe in some other circumstance you would have punched him in the goddamn lip. But you feel nothing, just a hollowness inside. "Why?", you ask, wondering if the answer would even matter.
 
"Because you don't use Bulgarian notation. Because your method names aren't lower camel case. Because good code doesn't require comments. Because you use classes & records over more performant structs, pointlessly burdening the heapstack. BECAUSE. YOUR CODE. IS. SHIT."
 
You clench your fists so tightly the knuckles whiten.
 

15

He looks away from you, taking another sip of green goo. "You're not a coder. You're an artist masquerading as one" he speaks, as if it were fact.
 
The only thing artistic about you is the ability to create user-friendly internal tooling using nothing but a UI framework, broken down garbage nobody wants to touch, & sheer willpower. If your son's life depended on you getting accepted into art instruction school, you couldn't even draw a turtle.
 
He doesn't pause. "I'll champion ruthless micro-optimization until the day I die. But buddy, I'm going to let you in on a little secret: you aren't here to improve workflow. You're here to LOOK like you're doing something NOBODY else can."
 
He goes on. "What do you think those artists are going to do when they have to stare at a progress bar for 4, 5 minutes? They're going to complain your tool is slow."
 
"Sure, it may take them 20, 30 minutes to do it the old way, there'll be an error, and either they'll stare at it for 30 minutes before adding that missing semi-colon or they'll come get you. And you'll fix it. And 1 week later, they won't remember how. And you'll stay employed. And every. Body. Wins."
 

16

A little bit of the pride, the caring, wells back up inside from somewhere long forgotten.
 
"You don't think we should care about rapid application development & KISS, quickly getting things out that help our team, instead devoting ourselves to shaving off ticks here & there? What do you think artists are going to do with those 4 minutes you talk about?
 
You don't stop. "I'll tell you what they'll do. They'll 9GAG for 20 minutes straight. They'll listen to podcasts about dialectical materialism vis-a-vis the neo-feudalism that is a natural extension of the modern world's capitalist prison. They'll Reddit."
 
His silence gives you the bravery to push the limits.
 
"Christ, man. Are you only in it for the $120K..."
 
He corrects you: "...$123K."
 
"...only in it for the $123K/year? The free snacks from the microkitchen? The adulation? Have you no sense of comraderie?? No desire to push us to something better?! No integrity?!!!"
 
His eyes sharply narrow, face creases in anger. You clearly have overstepped your bounds.
 

17

"You think I don't have integrity? No sense of teamwork? I'm only in it for the cold cash? You think I don't care about you all?", he roars.
 
A light volley of small green flecks land on your face.
 
"Why do you think they made a 16-year old tester the lead developer of a 1993 Doom clone?! Because my code was clean & painless to work with?! Because I made coding look easy?! No! IT WAS BECAUSE I WAS A GOD TO THEM.
 
And from a God, a PANTHEON. We built monuments to over-engineering! We crafted that of 7 weeks onboarding, that of immortal bugs, demonic hosts spawned by legion from the very loins of a fix. It took 2 years before a developer could BEGIN to feel confident they knew what they were doing. And by that time, they were one of US!
 
You think the team we laid off November '19 was fired because they were bad at their jobs? NO! It was because they worked themselves out of one. They didn't leave us a broken pipeline. They left an internal Wiki, a wealth of tools & example projects, and a completely transparent code base.
 
We couldn't have THAT, now could we? No, we couldn't. So we got rid of it. ALL OF IT. Poof. Gone. Just like that. Before anyone even knew a THING."
 
He leans forward, so close his psoriasis almost touches yours.  
With an intensity that borders on frightening, he whispers, "You think they left us Game.dll? I fucking MADE Game.dll."
 
The words hit hard like a freight train.
 

18

And without another word, he turns & leaves. You're left there, alone, coworkers milling about, with only one thought.
     
Were one to get a hobby, should it be cocaine?
 

In Conclusion

It's these kinds of situations that make me believe there are far more important considerations than a ruthless dedication to performance, even in the game industry, as my real-world scenario so clearly demonstrates.
 
Like, records are cool & shit.
submitted by form_d_k to shittyprogramming

Node.js Application Monitoring with Prometheus and Grafana

Hi guys, we published this article on our blog (here) some time ago and I thought it could be interesting for r/node to read as well, since we got some good feedback on it!

What is application monitoring and why is it necessary?

Application monitoring is a method that uses software tools to gain insights into your software deployments. This can range from simple health checks that verify whether the server is available, to more advanced setups where a monitoring library integrated into your server sends data to a dedicated monitoring service. It can even involve the client side of your application, offering more detailed insights into the user experience.
For every developer, monitoring should be a crucial part of the daily work, because you need to know how the software behaves in production. You can let your testers work with your system and try to mock interactions or high loads, but these techniques will never be the same as the real production workload.

What is Prometheus and how does it work?

Prometheus is an open-source monitoring system that was created in 2012 at SoundCloud. In 2016, Prometheus became the second project (following Kubernetes) to be hosted by the Cloud Native Computing Foundation.
The Prometheus server collects metrics from your servers and other monitoring targets by pulling their metric endpoints over HTTP at a predefined time interval. For ephemeral and batch jobs, for which metrics can't be scraped periodically due to their short-lived nature, Prometheus offers a Pushgateway. This is an intermediate server that monitoring targets can push their metrics to before exiting. The data is retained there until the Prometheus server pulls it later.
The core data structure of Prometheus is the time series, which is essentially a list of timestamped values that are grouped by metric.
With PromQL (Prometheus Query Language), Prometheus provides a functional query language allowing for selection and aggregation of time series data in real-time. The result of a query can be viewed directly in the Prometheus web UI, or consumed by external systems such as Grafana via the HTTP API.
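For example, assuming a histogram called http_request_duration_seconds (the same metric that is exported later in this article), a query like the following computes the average request duration over the last five minutes:
rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])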

How to use prom-client to export metrics in Node.js for Prometheus?

prom-client is the most popular Prometheus client library for Node.js. It provides the building blocks to export metrics to Prometheus via the pull and push methods and supports all Prometheus metric types such as histograms, summaries, gauges, and counters.
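As a quick illustration of the API (a minimal sketch, not taken from the original article; the metric name and label value are made up), creating and incrementing a counter looks roughly like this:
const client = require('prom-client')

// Hypothetical counter tracking processed orders, with a status label
const ordersProcessed = new client.Counter({
  name: 'orders_processed_total',
  help: 'Total number of processed orders',
  labelNames: ['status']
})

// Increment the counter whenever an order is handled
ordersProcessed.labels('success').inc()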

Setup sample Node.js project

Create a new directory and set up the Node.js project:
$ mkdir example-nodejs-app
$ cd example-nodejs-app
$ npm init -y

Install prom-client

The prom-client npm module can be installed via:
$ npm install prom-client 

Exposing default metrics

Every Prometheus client library comes with predefined default metrics that are assumed to be good for all applications on the specific runtime. The prom-client library also follows this convention. The default metrics are useful for monitoring the usage of resources such as memory and CPU.
You can capture and expose the default metrics with the following code snippet:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)

Exposing custom metrics

While default metrics are a good starting point, at some point, you’ll need to define custom metrics in order to stay on top of things.
Capturing and exposing a custom metric for HTTP request durations might look like this:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Create a histogram metric
const httpRequestDurationMicroseconds = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in microseconds',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10]
})

// Register the histogram
register.registerMetric(httpRequestDurationMicroseconds)

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Start the timer
  const end = httpRequestDurationMicroseconds.startTimer()

  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }

  // End timer and add labels
  end({ route, code: res.statusCode, method: req.method })
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)
Copy the above code into a file called server.js and start the Node.js HTTP server with the following command:
$ node server.js 
You should now be able to access the metrics via http://localhost:8080/metrics.
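To sanity-check the endpoint from the command line, you can curl it; the output should roughly resemble the following (abbreviated here, and the exact set of default metrics depends on your prom-client version):
$ curl http://localhost:8080/metrics
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total{app="example-nodejs-app"} 0.12
...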

How to scrape metrics from Prometheus

Prometheus is available as a Docker image and can be configured via a YAML file.
Create a configuration file called prometheus.yml with the following content:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: "example-nodejs-app"
    static_configs:
      - targets: ["docker.for.mac.host.internal:8080"]
The config file tells Prometheus to scrape all targets every 5 seconds. The targets are defined under scrape_configs. On Mac, you need to use docker.for.mac.host.internal as host, so that the Prometheus Docker container can scrape the metrics of the local Node.js HTTP server. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the docker run command to start the Prometheus Docker container and mount the configuration file (prometheus.yml):
$ docker run --rm -p 9090:9090 \
  -v `pwd`/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus:v2.20.1
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Prometheus Web UI on http://localhost:9090

What is Grafana and how does it work?

Grafana is a web application that allows you to visualize data sources via graphs or charts. It comes with a variety of chart types, allowing you to choose whatever fits your monitoring data needs. Multiple charts are grouped into dashboards in Grafana, so that multiple metrics can be viewed at once.
https://preview.redd.it/vt8jwu8vpor51.png?width=3584&format=png&auto=webp&s=4101843c84cfc6293debcdfc3bdbe70811dab2e9
The metrics displayed in the Grafana charts come from data sources. Prometheus is one of the supported data sources for Grafana, but it can also use other systems, like AWS CloudWatch, or Azure Monitor.
Grafana also allows you to define alerts that will be triggered if certain issues arise, meaning you’ll receive an email notification if something goes wrong. For a more advanced alerting setup check out the Grafana integration for Opsgenie.

Starting Grafana

Grafana is also available as Docker container. Grafana datasources can be configured via a configuration file.
Create a configuration file called datasources.yml with the following content:
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    orgId: 1
    url: http://docker.for.mac.host.internal:9090
    basicAuth: false
    isDefault: true
    editable: true
The configuration file specifies Prometheus as a datasource for Grafana. Please note that on Mac, we need to use docker.for.mac.host.internal as host, so that Grafana can access Prometheus. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the following command to start a Grafana Docker container and to mount the configuration file of the datasources (datasources.yml). We also pass some environment variables to disable the login form and to allow anonymous access to Grafana:
$ docker run --rm -p 3000:3000 \
  -e GF_AUTH_DISABLE_LOGIN_FORM=true \
  -e GF_AUTH_ANONYMOUS_ENABLED=true \
  -e GF_AUTH_ANONYMOUS_ORG_ROLE=Admin \
  -v `pwd`/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml \
  grafana/grafana:7.1.5
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Grafana Web UI on http://localhost:3000

Configuring a Grafana Dashboard

Once the metrics are available in Prometheus, we want to view them in Grafana. This requires creating a dashboard and adding panels to that dashboard:
  1. Go to the Grafana UI at http://localhost:3000, click the + button on the left, and select Dashboard.
  2. In the new dashboard, click on the Add new panel button.
  3. In the Edit panel view, you can select a metric and configure a chart for it.
  4. The Metrics drop-down on the bottom left allows you to choose from the available metrics. Let’s use one of the default metrics for this example.
  5. Type process_resident_memory_bytes into the Metrics input and {{app}} into the Legend input.
  6. On the right panel, enter Memory Usage for the Panel title.
  7. As the unit of the metric is in bytes, we need to select bytes (Metric) for the left y-axis in the Axes section, so that the chart is easy to read for humans.
You should now see a chart showing the memory usage of the Node.js HTTP server.
Press Apply to save the panel. Back on the dashboard, click the small "save" symbol at the top right, a pop-up will appear allowing you to save your newly created dashboard for later use.

Setting up alerts in Grafana

Since nobody wants to sit in front of Grafana all day watching and waiting to see if things go wrong, Grafana allows you to define alerts. These alerts regularly check whether a metric adheres to a specific rule, for example, whether the errors per second have exceeded a specific value.
Alerts can be set up for every panel in your dashboards.
  1. Go into the Grafana dashboard we just created.
  2. Click on a panel title and select edit.
  3. Once in the edit view, select "Alerts" from the middle tabs, and press the Create Alert button.
  4. In the Conditions section specify 42000000 after IS ABOVE. This tells Grafana to trigger an alert when the Node.js HTTP server consumes more than 42 MB of memory.
  5. Save the alert by pressing the Apply button in the top right.
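Under the hood, an alert simply evaluates a query on a schedule, so a rule like the "errors per second" example mentioned earlier is just another PromQL expression. A hedged sketch, reusing the http_request_duration_seconds histogram from above and assuming errors carry 5xx status codes in the code label:
sum(rate(http_request_duration_seconds_count{code=~"5.."}[5m]))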

Sample code repository

We created a code repository that contains a collection of Docker containers with Prometheus, Grafana, and a Node.js sample application. It also contains a Grafana dashboard, which follows the RED monitoring methodology.
Clone the repository:
$ git clone https://github.com/coder-society/nodejs-application-monitoring-with-prometheus-and-grafana.git 
The JavaScript code of the Node.js app is located in the /example-nodejs-app directory. All containers can be started conveniently with docker-compose. Run the following command in the project root directory:
$ docker-compose up -d 
After executing the command, a Node.js app, Grafana, and Prometheus will be running in the background. The charts of the gathered metrics can be accessed and viewed via the Grafana UI at http://localhost:3000/d/1DYaynomMk/example-service-dashboard.
To generate traffic for the Node.js app, we will use the ApacheBench command line tool, which allows sending requests from the command line.
On MacOS, it comes pre-installed by default. On Debian-based Linux distributions, ApacheBench can be installed with the following command:
$ apt-get install apache2-utils 
For Windows, you can download the binaries from Apache Lounge as a ZIP archive. ApacheBench will be named ab.exe in that archive.
This CLI command will run ApacheBench so that it sends 10,000 requests to the /order endpoint of the Node.js app:
$ ab -m POST -n 10000 -c 100 http://localhost:8080/order 
Depending on your hardware, running this command may take some time.
After running the ab command, you can access the Grafana dashboard via http://localhost:3000/d/1DYaynomMk/example-service-dashboard.

Summary

Prometheus is a powerful open-source tool for self-hosted monitoring. It’s a good option for cases in which you don’t want to build from scratch but also don’t want to invest in a SaaS solution.
With a community-supported client library for Node.js and numerous client libraries for other languages, the monitoring of all your systems can be bundled into one place.
Its integration is straightforward, involving just a few lines of code. It can be done directly for long-running services or with the help of the Pushgateway for short-lived jobs and FaaS-based implementations.
Grafana is also an open-source tool that integrates well with Prometheus. Among the many benefits it offers are flexible configuration, dashboards that allow you to visualize any relevant metric, and alerts to notify of any anomalous behavior.
These two tools combined offer a straightforward way to get insights into your systems. Prometheus offers huge flexibility in terms of metrics gathered and Grafana offers many different graphs to display these metrics. Prometheus and Grafana also integrate so well with each other that it’s surprising they’re not part of one product.
You should now have a good understanding of Prometheus and Grafana and how to make use of them to monitor your Node.js projects in order to gain more insights and confidence in your software deployments.
submitted by matthevva to node

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money on registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's a fair trade for not spending ~15€/year on a domain name), which is needed for the mailserver specifically.
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.
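To double-check that the new swap size is actually active after the commands above (a quick sanity check, not part of the original steps):
$ free -h
$ swapon --show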

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before we start installing packages, we'll take a moment to update every component that is already installed.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity's sake we'll give our server a static IP address (within our LAN, of course). You can set it using your router's configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and set up iRedMail:
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for the webserver, set Nginx.
When asked for the DB engine, set MariaDB.
When asked, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it later.
When asked, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}

server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
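For orientation, the rule ends up inside the input chain, roughly like this (a minimal sketch; your actual /etc/nftables.conf will already contain more rules than shown here):
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # ... existing rules ...
        # avahi
        udp dport 5353 accept
    }
}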

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
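Note that the mount command above assumes the mount point already exists; if it doesn't, create it first:
$ sudo mkdir -p /NAS/RED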

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 attributes, since those are the temperature values and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which send alerts in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announce a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab, otherwise it will be mounted to a default location, using its label (if available, partition name is used otherwise).
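To pick up the new rule and service without rebooting, you can (as far as I know) reload udev and systemd and then re-plug the drive; a reboot also works:
$ sudo udevadm control --reload-rules
$ sudo systemctl daemon-reload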

Netdata

Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use an Nginx reverse proxy for this:
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}

server {
    listen 80;
    server_name netdata.naspi.webredirect.org;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, to enable every service you would like. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration # email global notification options SEND_EMAIL="YES" # Sender address EMAIL_SENDER="NetData [email protected]" # Recipients addresses DEFAULT_RECIPIENT_EMAIL="[email protected]" # telegram (telegram.org) global notification options SEND_TELEGRAM="YES" # Bot token TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Chat ID DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx" ############################################################################### # RECIPIENTS PER ROLE # generic system alarms role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}" # DNS related alarms role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}" # database servers alarms role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}" # web servers alarms role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}" # proxy servers alarms role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}" # peripheral devices role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}" role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}" 
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global] # Network workgroup = NASPi interfaces = 127.0.0.0/8 eth0 bind interfaces only = yes # Log log file = /valog/samba/log.%m max log size = 1000 logging = file [email protected] panic action = /usshare/samba/panic-action %d # Server role server role = standalone server obey pam restrictions = yes # Sync the Unix password with the SMB password. unix password sync = yes passwd program = /usbin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user security = user #======================= Share Definitions ======================= [Disk 1] comment = Disk1 on LAN path = /NAS/RED valid users = NAS force group = NAS create mask = 0777 directory mask = 0777 writeable = yes admin users = NASdisk 
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but open source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt php-ldap
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER [email protected] IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO [email protected] IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud { server 127.0.0.1:9999; keepalive 64; } server { server_name naspi.webredirect.org; root /vawww/nextcloud; listen 80; add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; fastcgi_hide_header X-Powered_By; location = /robots.txt { allow all; log_not_found off; access_log off; } rewrite ^/.well-known/host-meta /public.php?service=host-meta last; rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last; rewrite ^/.well-known/webfinger /public.php?service=webfinger last; location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; } location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; } client_max_body_size 512M; fastcgi_buffers 64 4K; gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; location / { rewrite ^ /index.php; } location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; } location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) { fastcgi_split_path_info ^(.+?\.php)(\/.*|)$; set $path_info $fastcgi_path_info; try_files $fastcgi_script_name =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $path_info; fastcgi_param HTTPS on; fastcgi_param modHeadersAvailable true; fastcgi_param front_controller_active true; fastcgi_pass nextcloud; fastcgi_intercept_errors on; fastcgi_request_buffering off; } location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) { try_files $uri/ =404; index index.php; } location ~ \.(?:css|js|woff2?|svg|gif|map)$ { try_files $uri /index.php$request_uri; add_header Cache-Control "public, max-age=15778463"; add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; access_log off; } location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ { try_files $uri /index.php$request_uri; access_log off; } } 
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than where the files you save on the NextCloud will be stored. Because it might grow large, I suggest you specify a folder on an external disk.

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4 (docs: https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md)
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}

server {
    server_name minarca.naspi.webredirect.org;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }

    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last thing, you will need to set up your DNS records, in order to avoid having your mail rejected or marked as spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports, for instance the port 8080 if you want to use minarca even outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest you move it to something different from port 22 (the default), to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a Samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi

How to generate (relative) secure paper wallets and spend them (Newbies)

How to generate (relatively) secure paper wallets. Everyone is invited to suggest improvements, make it easier or more robust, provide alternatives, comment on what they like or not, and also criticize it.
Also, a disclaimer: I'm new to all of this. First, I didn't buy a hardware wallet because they are not produced in my country and I couldn't trust that they hadn't been tampered with, so the other way was to generate one myself (not your keys, not your money). I've spent several weeks instructing myself by reading various ways of generating wallets (including Glacier). As of now, I think this is THE BEST METHOD for a non-technical person: high security, low cost, and not that lengthy.
FAQs: Why didn't I use Coleman's BIP 39 mnemonic method? Basically, I don't know how to audit the code. As a downside, we will have to write down our keys very accurately, keeping in mind that a mistype is fatal. Also, we should keep in mind that destruction of the key is fatal as well. The user has to protect the key from loss, theft and destruction.
Lets start
You'll need:
Notes: We will be following the https://www.swansontec.com/bitcoin-dice.html guidelines. We will be creating our own random key instead of downloading the BitAddress JavaScript, for safety reasons. Following this guideline lets you audit the code that will create the public key and bitcoin address. It's simple and short, and you can always test the code by inputting a known private key to tell whether the bitcoin address generated is legit or not. This process is done offline, so your private key never touches the internet.
Steps
1. Download the bitcoin-bash-tools and dice2key scripts from GitHub, the latest Ubuntu distribution, and LiLi, a program to install Ubuntu on our flash drive (easier than what is proposed on Swansontec)

2. Install the live environment on a CD or USB drive, and paste the tools we are going to use inside of it (they are going to be located at file://cdrom)

  • Open up LiLi and insert your flash drive.

  • Make sure you’ve selected the correct drive (click refresh if drive isn’t showing).
  • Choose “ISO/IMG/ZIP” and select the Ubuntu ISO file you’ve downloaded in the previous step.
  • Make sure only “Format the key in FAT32” is selected.
  • Click the lightning bolt to start the format and installation process
  • Source: https://99bitcoins.com/bitcoin-wallet/pape

    3. Open the Ubuntu environment on an offline computer that will never touch the internet again (there is malware that infects the BIOS, so doing this on your regular computer is not safe, to my understanding).

    Restart your computer. Pressing F12 or F1 during the boot-up process will allow you to choose to run the operating system from your flash drive or CD. After the Ubuntu operating system loads, choose the “Try Ubuntu” option.
    4. Roll the dice 100 times and convert the rolls into a 32-byte hexadecimal number by using dice2key.

    To generate a Bitcoin private key from the rolls, run the following command to convert the dice rolls into a 32-byte hexadecimal number: source dice2key (your 100 six-sided dice rolls)
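    For intuition, dice2key is doing something along these lines: treat the 100 rolls as base-6 digits and reduce them to a 256-bit number printed as 64 hex characters. The Python below is only an illustrative sketch of that idea, not the actual dice2key script (which may map the digits differently), so audit the real one from the repository before trusting it.

    import secrets

    # Illustrative sketch only -- NOT the real dice2key script.
    # For a real wallet you would type in your 100 physical dice rolls
    # instead of this randomly generated stand-in value.
    rolls = "".join(secrets.choice("123456") for _ in range(100))

    # Interpret the rolls as base-6 digits (1..6 -> 0..5) and reduce to 256 bits.
    n = 0
    for c in rolls:
        n = n * 6 + (int(c) - 1)
    n %= 2 ** 256

    print("0x" + format(n, "064x"))  # 64 hex characters = 32 bytes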

    5. Run newBitcoinKey 0x followed by your private key and it will give you your public key, bitcoin address and WIF. Save the private key and bitcoin address. Check several times that you have written them down correctly; you can check by re-entering the key in the console from your paper. (I recommend writing down the private key, which is in hex, rather than the WIF, since the WIF is case sensitive and easier to lose or copy wrong. Also, from the private key you can always derive the WIF, which will let you transfer your funds.) If you lose your key, you lose your funds. Be careful.
    If auditing the code for this is not enough for you, you can also test the code by inputting a known private keys to tell if the bitcoin address generated is legit or not.
    I recommend you generate several keys and addresses, as this process is not super easy to do. Remember that you should never reuse your paper wallets (meaning that when you make a payment, you should empty all of the funds from that one address). As such, a couple of addresses come in handy.
    At this point, there should be no way for information to leak out of the live CD environment. The live CD doesn't store anything on the hard disk, and there is no network connection. Everything that happens from now on will be lost when the computer is rebooted.
    Now, start the "Terminal" program, and type the following command:
    source ~/bitcoin.sh
    This will load the address-calculation script. Now, use the script to find the Bitcoin address for your private key:
    newBitcoinKey 0x(your dice digits)
    Replace the part that says "(your dice digits)" with 64 digits found by rolling your pair of hexadecimal dice 32 times. Be sure there is no space between the "0x" and your digits. When all is said and done, your terminal window should look like this:
    [email protected]:~$ source ~/bitcoin.sh
    [email protected]:~$ newBitcoinKey 0x8010b1bb119ad37d4b65a1022a314897b1b3614b345974332cb1b9582cf03536
    ---
    secret exponent: 0x8010B1BB119AD37D4B65A1022A314897B1B3614B345974332CB1B9582CF03536
    public key:
        X: 09BA8621AEFD3B6BA4CA6D11A4746E8DF8D35D9B51B383338F627BA7FC732731
        Y: 8C3A6EC6ACD33C36328B8FB4349B31671BCD3A192316EA4F6236EE1AE4A7D8C9
    compressed:
        WIF: L1WepftUBemj6H4XQovkiW1ARVjxMqaw4oj2kmkYqdG1xTnBcHfC
        bitcoin address: 1HV3WWx56qD6U5yWYZoLc7WbJPV3zAL6Hi
    uncompressed:
        WIF: 5JngqQmHagNTknnCshzVUysLMWAjT23FWs1TgNU5wyFH5SB3hrP
        bitcoin address:
    [email protected]:~$
    The script produces two public addresses from the same private key. The "compressed" address format produces smaller transaction sizes (which means lower transaction fees), but it's newer and not as well-supported as the original "uncompressed" format. Choose which format you like, and write down the "WIF" and "bitcoin address" on a piece of paper. The "WIF" is just the private key, converted to a slightly shorter format that Bitcoin wallet apps prefer.
    Double-check your paper, and reboot your computer. Aside from the copy on the piece of paper, the reboot should destroy all traces of the private key. Since the paper now holds the only copy of the private key, do not lose it, or you will lose the ability to spend any funds sent to the address!
    Conclusion
    With this method you are creating an air-gapped environment that will never touch the internet. Also, we are checking that the code we use is not tampered with. If this is followed strictly, I see virtually no chance of your keys being hacked.
    How to spend your funds from a securely generated paper wallet.
    Almost all tutorials seen online will have you import or sweep your private keys into a desktop or mobile wallet, which are hot wallets. The moment you do that, you are exposed and all of your work to secure the cold storage is thrown away. This method will let you sign the transaction offline instead (you will not expose your private key on an online system).
    You'll need:
    The source of this method is CryptoGuide on YouTube: https://www.youtube.com/watch?v=-9kf9LMnJpI&t=86s . Basically you can follow his video as it is foolproof. Please check that the Electrum distribution is signed.
    The summarized steps are:
    1. Download Electrum on both devices and check that it's signed, for safety.
    2. Disconnect your phone from the internet (flight mode = all connections off) and input your private key in Electrum.
    3. Generate the transaction on your desktop and export it via QR code (never leave unspent BTC behind or you will lose them).
    4. On your phone, open Electrum > Send > QR (this will import the transaction) and scan the transaction exported from the desktop.
    5. Sign the transaction on your phone.
    6. Export the signed transaction as a QR code.
    7. Load the signed transaction into the desktop Electrum and broadcast it to the network.
    8. Wait for 3 confirmations before connecting your phone to the internet again.
    Ideas for improvement:
    So that's it. I hope someone finds this helpful or helps in creating a better method. If you like, you can donate at 1Che7FG93vDsbes6NPBhYuz29wQoW7qFUH
    submitted by Heron-Express to Bitcoin [link] [comments]

    I made a rudimentary Python "game randomizer" to solve the problem of "how to use the learning curve of programming to deceive people into playing what I want today?"

    [Details of the code after the main body]
    Background
    My group of friends would get together, maybe once every 2 months or so for a big day of games (pre-Covid). Since it would be so long between game days, I'd want to keep all the game options open, but this would just lead to stalemated conversations about what I wanted to play. In the past I had created massive google sheets of games for everyone to vote on in advance of a given day, but it turned prepping a thousand entries of Wingspan onto a spreadsheet into way too much work, despite the good intention of "everyone gets to play what I want and not what I DON'T want."
    Back in Feb, my wife's boyfriend said "You all like most of the games enough to play them, why don't you just pick it randomly?"
    And I was off!
    Now, I'm not a person with a brain at all. I wouldn't even say I'm a hobbyist who owns a set of dice who could assign games to each face and roll them. I had some spare time like two years ago and just learned some Python and some Javascript and some SQL and some Assembly just for the hell of it. Nothing too crazy, but enough to be able to create SOME version of what I envisioned - an artificial general intelligence (AGI) capable of using discrete stochastic logic to pick games of my choosing. I had a lot of fun trying different things and eventually getting something that worked.
    I did some test runs and sent the black box and white box test results to my friends, who had no fucking idea what I was talking about, and they spawned great discussion about AI surpassing human intelligence and rendering us obsolete within the next 20 years. I got a level of enthusiasm that far surpassed any of my other life accomplishments. Finally, the stress of being the guy to say "Let me pull out my laptop. I think I left it in my car. Wait, nope its right here... okay just gotta log in here. Sorry guys, my laptops been really slow latel- okay its up. Let me open up python real quick... okay gotta open up my source files here. Alright, here's my code. Shit, why isn't it working? You know what, I realized I commented this part out... not sure why I did that. Okay... hold on. Now, everyone tell me what games you want me to input into my program. Wait, shit, now my laptop has to update. Okay, it says it's going to be 30 minutes, in the meantime, can we PLEASE play ?" was gone!
    We have our first real game day since COVID broke out this weekend, and I'm going to roll the metaphorical dice to see what we're going to play. Here's an example of a roll:
    You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork You should play Patchwork 
    My unimpressive Code
    I'm sure there's a better way to do it but this is what I ended up with! Lots of list comprehensions.
    I created a boolean statement called "value" and it is always set to "true". For those of you who don't know, a boolean statement can either be "true" or "false"; you could think of it like a 1 or 0 in terms of binary. Anyways, it's set to "true" and nothing ever changes it. Next there is what is called a "while loop". A while loop runs for as long as its condition is met, in the case of my code, as long as value is "true", the while loop will run. Because value never changes, the while loop will never stop running; this is called an infinite loop. Inside of the while loop, meaning the code that the while loop is going to execute, there is a print statement that will output the string of characters "You should play Patchwork". Again, because the loop will never break, my print statement "You should play Patchwork" will continue forever until I exit out of the program.
    value = True
    while value:
        print("You should play Patchwork")
    Thanks for reading!
    submitted by DocGerbil256 to boardgamescirclejerk [link] [comments]

    First Time Going Through Coding Interviews?

    This post draws on my personal experiences and challenges over the past term at school, which I entered with hardly any knowledge of DSA (data structures and algorithms) and problem-solving strategies. As a self-taught programmer, I was a lot more familiar and comfortable with general programming, such as object-oriented programming, than with the problem-solving skills required in DSA questions.
    This post reflects my journey throughout the term and the resources I turned to in order to quickly improve for my coding interview.
    Here're some common questions and answers
    What's the interview process like at a tech company?
    Good question. It's actually pretty different from most other companies.

    What It's Like To Interview For A Coding Job

    First time interviewing for a tech job? Not sure what to expect? This article is for you.

    Here are the usual steps:

    1. First, you’ll do a non-technical phone screen.
    2. Then, you’ll do one or a few technical phone interviews.
    3. Finally, the last step is an onsite interview.
    Some companies also throw in a take-home code test—sometimes before the technical phone interviews, sometimes after.
    Let’s walk through each of these steps.

    The non-technical phone screen

    This first step is a quick call with a recruiter—usually just 10–20 minutes. It's very casual.
    Don’t expect technical questions. The recruiter probably won’t be a programmer.
    The main goal is to gather info about your job search. Stuff like:

    1. Your timeline. Do you need to sign an offer in the next week? Or are you trying to start your new job in three months?
    2. What’s most important to you in your next job. Great team? Flexible hours? Interesting technical challenges? Room to grow into a more senior role?
    3. What stuff you’re most interested in working on. Front end? Back end? Machine learning?
    Be honest about all this stuff—that’ll make it easier for the recruiter to get you what you want.
    One exception to that rule: If the recruiter asks you about your salary expectations on this call, best not to answer. Just say you’d rather talk about compensation after figuring out if you and the company are a good fit. This’ll put you in a better negotiating position later on.

    The technical phone interview(s)

    The next step is usually one or more hour-long technical phone interviews.
    Your interviewer will call you on the phone or tell you to join them on Skype or Google Hangouts. Make sure you can take the interview in a quiet place with a great internet connection. Consider grabbing a set of headphones with a good microphone or a bluetooth earpiece. Always test your hardware beforehand!
    The interviewer will want to watch you code in real time. Usually that means using a web-based code editor like Coderpad or collabedit. Run some practice problems in these tools ahead of time, to get used to them. Some companies will just ask you to share your screen through Google Hangouts or Skype.
    Turn off notifications on your computer before you get started—especially if you’re sharing your screen!
    Technical phone interviews usually have three parts:

    1. Beginning chitchat (5–10 minutes)
    2. Technical challenges (30–50 minutes)
    3. Your turn to ask questions (5–10 minutes)
    The beginning chitchat is half just to help you relax, and half actually part of the interview. The interviewer might ask some open-ended questions like:

    1. Tell me about yourself.
    2. Tell me about something you’ve built that you’re particularly proud of.
    3. I see this project listed on your resume—tell me more about that.
    You should be able to talk at length about the major projects listed on your resume. What went well? What didn’t? How would you do things differently now?
    Then come the technical challenges—the real meat of the interview. You’ll spend most of the interview on this. You might get one long question, or several shorter ones.
    What kind of questions can you expect? It depends.
    Startups tend to ask questions aimed towards building or debugging code. (“Write a function that takes two rectangles and figures out if they overlap.”). They’ll care more about progress than perfection.
    Larger companies will want to test your general know-how of data structures and algorithms (“Write a function that checks if a binary tree is ‘balanced’ in O(n) time.”). They’ll care more about how you solve and optimize a problem.
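    To make the startup-style example above concrete, here is a minimal Python sketch of the rectangle-overlap question (representing each rectangle as left/bottom/width/height is just an assumption for illustration):

    # Minimal sketch: do two axis-aligned rectangles overlap?
    # Each rectangle is assumed to be a dict with left, bottom, width, height.
    def rects_overlap(a, b):
        overlap_x = a["left"] < b["left"] + b["width"] and b["left"] < a["left"] + a["width"]
        overlap_y = a["bottom"] < b["bottom"] + b["height"] and b["bottom"] < a["bottom"] + a["height"]
        return overlap_x and overlap_y

    print(rects_overlap({"left": 0, "bottom": 0, "width": 2, "height": 2},
                        {"left": 1, "bottom": 1, "width": 2, "height": 2}))  # True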
    With these types of questions, the most important thing is to be communicating with your interviewer throughout. You'll want to "think out loud" as you work through the problem. For more info, check out our more detailed step-by-step tips for coding interviews.
    If the role requires specific languages or frameworks, some companies will ask trivia-like questions (“In Python, what’s the ‘global interpreter lock’?”).
    After the technical questions, your interviewer will open the floor for you to ask them questions. Take some time before the interview to comb through the company’s website. Think of a few specific questions about the company or the role. This can really make you stand out.
    When you’re done, they should give you a timeframe on when you’ll hear about next steps. If all went well, you’ll either get asked to do another phone interview, or you’ll be invited to their offices for an onsite.

    The onsite interview

    An onsite interview happens in person, at the company’s office. If you’re not local, it’s common for companies to pay for a flight and hotel room for you.
    The onsite usually consists of 2–6 individual, one-on-one technical interviews (usually in a small conference room). Each interview will be about an hour and have the same basic form as a phone screen—technical questions, bookended by some chitchat at the beginning and a chance for you to ask questions at the end.
    The major difference between onsite technical interviews and phone interviews though: you’ll be coding on a whiteboard.
    This is awkward at first. No autocomplete, no debugging tools, no delete button…ugh. The good news is, after some practice you get used to it. Before your onsite, practice writing code on a whiteboard (in a pinch, a pencil and paper are fine). Some tips:

    1. Start in the top-most left corner of the whiteboard. This gives you the most room. You’ll need more space than you think.
    2. Leave a blank line between each line as you write your code. Makes it much easier to add things in later.
    3. Take an extra second to decide on your variable names. Don’t rush this part. It might seem like a waste of time, but using more descriptive variable names ultimately saves you time because it makes you less likely to get confused as you write the rest of your code.
    If a technical phone interview is a sprint, an onsite is a marathon. The day can get really long. Best to keep it open—don’t make other plans for the afternoon or evening.
    When things go well, you’ll wrap up by chatting with the CEO or some other director. This is half an interview, half the company trying to impress you. They may invite you to get drinks with the team after hours.
    All told, a long day of onsite interviews could look something like this:

    If they let you go after just a couple interviews, it’s usually a sign that they’re going to pass on you. That’s okay—it happens!
    There are a lot of easy things you can do the day before and morning of your interview to put yourself in the best possible mindset. Check out our piece on what to do in the 24 hours before your onsite coding interview.

    The take-home code test

    Code tests aren’t ubiquitous, but they seem to be gaining in popularity. They’re far more common at startups, or places where your ability to deliver right away is more important than your ability to grow.
    You’ll receive a description of an app or service, a rough time constraint for writing your code, and a deadline for when to turn it in. The deadline is usually negotiable.
    Here's an example problem:
    Write a basic “To-Do” app. Unit test the core functionality. As a bonus, add a “reminders” feature. Try to spend no more than 8 hours on it, and send in what you have by Friday with a small write-up.
    Take a crack at the “bonus” features if they include any. At the very least, write up how you would implement it.
    If they’re hiring for people with knowledge of a particular framework, they might tell you what tech to use. Otherwise, it’ll be up to you. Use what you’re most comfortable with. You want this code to show you at your best.
    Some places will offer to pay you for your time. It's rare, but some places will even invite you to work with them in their office for a few days, as a "trial."
    Do I need to know this "big O" stuff?
    Big O notation is the language we use for talking about the efficiency of data structures and algorithms.
    Will it come up in your interviews? Well, it depends. There are different types of interviews.
    There’s the classic algorithmic coding interview, sometimes called the “Google-style whiteboard interview.” It’s focused on data structures and algorithms (queues and stacks, binary search, etc).
    That’s what our full course prepares you for. It's how the big players interview. Google, Facebook, Amazon, Microsoft, Oracle, LinkedIn, etc.
    For startups and smaller shops, it’s a mixed bag. Most will ask at least a few algorithmic questions. But they might also include some role-specific stuff, like Java questions or SQL questions for a backend web engineer. They’ll be especially interested in your ability to ship code without much direction. You might end up doing a code test or pair-programming exercise instead of a whiteboarding session.
    To make sure you study for the right stuff, you should ask your recruiter what to expect. Send an email with a question like, “Is this interview going to cover data structures and algorithms? Or will it be more focused around coding in X language.” They’ll be happy to tell you.
    If you've never learned about data structures and algorithms, or you're feeling a little rusty, check out our Intuitive Guide to Data Structures and Algorithms.
    Which programming language should I use?
    Companies usually let you choose, in which case you should use your most comfortable language. If you know a bunch of languages, prefer one that lets you express more with fewer characters and fewer lines of code, like Python or Ruby. It keeps your whiteboard cleaner.
    Try to stick with the same language for the whole interview, but sometimes you might want to switch languages for a question. E.g., processing a file line by line will be far easier in Python than in C++.
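    For example, a minimal Python sketch of line-by-line file processing (example.txt is a hypothetical filename) is only a few lines, while the equivalent C++ needs includes, stream objects and an explicit loop:

    # Read a file line by line -- concise in Python.
    with open("example.txt") as f:   # hypothetical filename
        for line in f:
            print(line.rstrip("\n"))  # per-line processing goes here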
    Sometimes, though, your interviewer will do this thing where they have a pet question that’s, for example, C-specific. If you list C on your resume, they’ll ask it.
    So keep that in mind! If you’re not confident with a language, make that clear on your resume. Put your less-strong languages under a header like ‘Working Knowledge.’
    What should I wear?
    A good rule of thumb is to dress a tiny step above what people normally wear to the office. For most west coast tech companies, the standard digs are just jeans and a t-shirt. Ask your recruiter what the office is like if you’re worried about being too casual.
    Should I send a thank-you note?
    Thank-you notes are nice, but they aren’t really expected. Be casual if you send one. No need for a hand-calligraphed note on fancy stationery. Opt for a short email to your recruiter or the hiring manager. Thank them for helping you through the process, and ask them to relay your thanks to your interviewers.
    1) Coding Interview Tips
    How to get better at technical interviews without practicing
    Chitchat like a pro.
    Before diving into code, most interviewers like to chitchat about your background. They're looking for:

    You should have at least one:

    Nerd out about stuff. Show you're proud of what you've done, you're amped about what they're doing, and you have opinions about languages and workflows.
    Communicate.
    Once you get into the coding questions, communication is key. A candidate who needed some help along the way but communicated clearly can be even better than a candidate who breezed through the question.
    Understand what kind of problem it is. There are two types of problems:

    1. Coding. The interviewer wants to see you write clean, efficient code for a problem.
    2. Chitchat. The interviewer just wants you to talk about something. These questions are often either (1) high-level system design ("How would you build a Twitter clone?") or (2) trivia ("What is hoisting in Javascript?"). Sometimes the trivia is a lead-in for a "real" question e.g., "How quickly can we sort a list of integers? Good, now suppose instead of integers we had . . ."
    If you start writing code and the interviewer just wanted a quick chitchat answer before moving on to the "real" question, they'll get frustrated. Just ask, "Should we write code for this?"
    Make it feel like you're on a team. The interviewer wants to know what it feels like to work through a problem with you, so make the interview feel collaborative. Use "we" instead of "I," as in, "If we did a breadth-first search we'd get an answer in O(n) time." If you get to choose between coding on paper and coding on a whiteboard, always choose the whiteboard. That way you'll be situated next to the interviewer, facing the problem (rather than across from her at a table).
    Think out loud. Seriously. Say, "Let's try doing it this way—not sure yet if it'll work." If you're stuck, just say what you're thinking. Say what might work. Say what you thought could work and why it doesn't work. This also goes for trivial chitchat questions. When asked to explain Javascript closures, "It's something to do with scope and putting stuff in a function" will probably get you 90% credit.
    Say you don't know. If you're touching on a fact (e.g., language-specific trivia, a hairy bit of runtime analysis), don't try to appear to know something you don't. Instead, say "I'm not sure, but I'd guess $thing, because...". The because can involve ruling out other options by showing they have nonsensical implications, or pulling examples from other languages or other problems.
    Slow the eff down. Don't confidently blurt out an answer right away. If it's right you'll still have to explain it, and if it's wrong you'll seem reckless. You don't win anything for speed and you're more likely to annoy your interviewer by cutting her off or appearing to jump to conclusions.
    Get unstuck.
    Sometimes you'll get stuck. Relax. It doesn't mean you've failed. Keep in mind that the interviewer usually cares more about your ability to cleverly poke the problem from a few different angles than your ability to stumble into the correct answer. When hope seems lost, keep poking.
    Draw pictures. Don't waste time trying to think in your head—think on the board. Draw a couple different test inputs. Draw how you would get the desired output by hand. Then think about translating your approach into code.
    Solve a simpler version of the problem. Not sure how to find the 4th largest item in the set? Think about how to find the 1st largest item and see if you can adapt that approach.
    Write a naive, inefficient solution and optimize it later. Use brute force. Do whatever it takes to get some kind of answer.
    Think out loud more. Say what you know. Say what you thought might work and why it won't work. You might realize it actually does work, or a modified version does. Or you might get a hint.
    Wait for a hint. Don't stare at your interviewer expectantly, but do take a brief second to "think"—your interviewer might have already decided to give you a hint and is just waiting to avoid interrupting.
    Think about the bounds on space and runtime. If you're not sure if you can optimize your solution, think about it out loud. For example:

    Get your thoughts down.
    It's easy to trip over yourself. Focus on getting your thoughts down first and worry about the details at the end.
    Call a helper function and keep moving. If you can't immediately think of how to implement some part of your algorithm, big or small, just skip over it. Write a call to a reasonably-named helper function, say "this will do X" and keep going. If the helper function is trivial, you might even get away with never implementing it.
    Don't worry about syntax. Just breeze through it. Revert to English if you have to. Just say you'll get back to it.
    Leave yourself plenty of room. You may need to add code or notes in between lines later. Start at the top of the board and leave a blank line between each line.
    Save off-by-one checking for the end. Don't worry about whether your for loop should have "<" or "<=". Write a checkmark to remind yourself to check it at the end. Just get the general algorithm down.
    Use descriptive variable names. This will take time, but it will prevent you from losing track of what your code is doing. Use names_to_phone_numbers instead of nums. Imply the type in the name. Functions returning booleans should start with "is_*". Vars that hold a list should end with "s." Choose standards that make sense to you and stick with them.
    Clean up when you're done.
    Walk through your solution by hand, out loud, with an example input. Actually write down what values the variables hold as the program is running—you don't win any brownie points for doing it in your head. This'll help you find bugs and clear up confusion your interviewer might have about what you're doing.
    Look for off-by-one errors. Should your for loop use a "<=" instead of a "<"?
    Test edge cases. These might include empty sets, single-item sets, or negative numbers. Bonus: mention unit tests!
    Don't be boring. Some interviewers won't care about these cleanup steps. If you're unsure, say something like, "Then I'd usually check the code against some edge cases—should we do that next?"
    Practice.
    In the end, there's no substitute for running practice questions.
    Actually write code with pen and paper. Be honest with yourself. It'll probably feel awkward at first. Good. You want to get over that awkwardness now so you're not fumbling when it's time for the real interview.

    2) Tricks For Getting Unstuck During a Coding Interview
    Getting stuck during a coding interview is rough.
    If you weren’t in an interview, you might take a break or ask Google for help. But the clock is ticking, and you don’t have Google.
    You just have an empty whiteboard, a smelly marker, and an interviewer who’s looking at you expectantly. And all you can think about is how stuck you are.
    You need a lifeline for these moments—like a little box that says “In Case of Emergency, Break Glass.”
    Inside that glass box? A list of tricks for getting unstuck. Here’s that list of tricks.
    When you’re stuck on getting started
    1) Write a sample input on the whiteboard and turn it into the correct output "by hand." Notice the process you use. Look for patterns, and think about how to implement your process in code.
    Trying to reverse a string? Write “hello” on the board. Reverse it “by hand”—draw arrows from each character’s current position to its desired position.
    Notice the pattern: it looks like we’re swapping pairs of characters, starting from the outside and moving in. Now we’re halfway to an algorithm.
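    In code, that outside-in swapping looks roughly like this (a sketch that assumes the input is a Python list of characters, so it can be mutated in place):

    # Reverse a string by swapping pairs of characters from the outside in.
    def reverse_chars(chars):
        left, right = 0, len(chars) - 1
        while left < right:
            chars[left], chars[right] = chars[right], chars[left]
            left += 1
            right -= 1
        return chars

    print(reverse_chars(list("hello")))  # ['o', 'l', 'l', 'e', 'h']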
    2) Solve a simpler version of the problem. Remove or simplify one of the requirements of the problem. Once you have a solution, see if you can adapt that approach for the original question.
    Trying to find the k-largest element in a set? Walk through finding the largest element, then the second largest, then the third largest. Generalizing from there to find the k-largest isn’t so bad.
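    A sketch of that generalization (deliberately naive, and fine for a first pass: pull off the current maximum k-1 times, and the next maximum is the answer):

    # Naive k-th largest: remove the max k-1 times, then take the max of what's left.
    def kth_largest(items, k):
        items = list(items)          # copy so the caller's list is untouched
        for _ in range(k - 1):
            items.remove(max(items))
        return max(items)

    print(kth_largest([3, 1, 7, 5, 9], 4))  # 3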
    3) Start with an inefficient solution. Even if it feels stupidly inefficient, it’s often helpful to start with something that’ll return the right answer. From there, you just have to optimize your solution. Explain to your interviewer that this is only your first idea, and that you suspect there are faster solutions.
    Suppose you were given two lists of sorted numbers and asked to find the median of both lists combined. It’s messy, but you could simply:

    1. Concatenate the arrays together into a new array.
    2. Sort the new array.
    3. Return the value at the middle index.
    Notice that you could’ve also arrived at this algorithm by using trick (2): Solve a simpler version of the problem. “How would I find the median of one sorted list of numbers? Just grab the item at the middle index. Now, can I adapt that approach for getting the median of two sorted lists?”
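    Here is that messy-but-correct approach sketched in Python, mirroring the three steps above and simply returning the value at the middle index:

    # Brute-force median of two sorted lists: concatenate, sort, take the middle value.
    def median_of_two(a, b):
        merged = sorted(a + b)
        return merged[len(merged) // 2]

    print(median_of_two([1, 3, 5], [2, 4, 6, 8]))  # 4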
    When you’re stuck on finding optimizations
    1) Look for repeat work. If your current solution goes through the same data multiple times, you’re doing unnecessary repeat work. See if you can save time by looking through the data just once.
    Say that inside one of your loops, there’s a brute-force operation to find an element in an array. You’re repeatedly looking through items that you don’t have to. Instead, you could convert the array to a lookup table to dramatically improve your runtime.
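    A sketch of that fix (the data here is made up; the point is replacing the repeated linear scans with a set built once):

    # Before: for each wanted id, scan the whole list -> O(n * m).
    # After: build a set once, then each membership check is O(1) on average.
    known_ids = [4, 8, 15, 16, 23, 42]
    wanted = [15, 99, 42]

    known_set = set(known_ids)   # one pass to build the lookup table
    found = [w for w in wanted if w in known_set]
    print(found)  # [15, 42]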
    2) Look for hints in the specifics of the problem. Is the input array sorted? Is the binary tree balanced? Details like this can carry huge hints about the solution. If it didn’t matter, your interviewer wouldn’t have brought it up. It’s a strong sign that the best solution to the problem exploits it.
    Suppose you’re asked to find the first occurrence of a number in a sorted array. The fact that the array is sorted is a strong hint—take advantage of that fact by using a binary search.
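    A sketch of that binary search, adjusted so it lands on the first occurrence rather than just any occurrence:

    # First index where target appears in a sorted list, or -1 if absent -- O(log n).
    def first_occurrence(sorted_nums, target):
        lo, hi = 0, len(sorted_nums) - 1
        result = -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_nums[mid] >= target:
                if sorted_nums[mid] == target:
                    result = mid     # record it, but keep searching to the left
                hi = mid - 1
            else:
                lo = mid + 1
        return result

    print(first_occurrence([1, 2, 2, 2, 5, 7], 2))  # 1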

    Sometimes interviewers leave the question deliberately vague because they want you to ask questions to unearth these important tidbits of context. So ask some questions at the beginning of the problem.
    3) Throw some data structures at the problem. Can you save time by using the fast lookups of a hash table? Can you express the relationships between data points as a graph? Look at the requirements of the problem and ask yourself if there’s a data structure that has those properties.
    4) Establish bounds on space and runtime. Think out loud about the parameters of the problem. Try to get a sense for how fast your algorithm could possibly be:

    When All Else Fails
    1) Make it clear where you are. State what you know, what you’re trying to do, and highlight the gap between the two. The clearer you are in expressing exactly where you’re stuck, the easier it is for your interviewer to help you.
    2) Pay attention to your interviewer. If she asks a question about something you just said, there’s probably a hint buried in there. Don’t worry about losing your train of thought—drop what you’re doing and dig into her question.
    Relax. You’re supposed to get stuck.
    Interviewers choose hard problems on purpose. They want to see how you poke at a problem you don’t immediately know how to solve.
    Seriously. If you don’t get stuck and just breeze through the problem, your interviewer’s evaluation might just say “Didn’t get a good read on candidate’s problem-solving process—maybe she’d already seen this interview question before?”
    On the other hand, if you do get stuck, use one of these tricks to get unstuck, and communicate clearly with your interviewer throughout...that’s how you get an evaluation like, “Great problem-solving skills. Hire.”

    3) Fixing Impostor Syndrome in Coding Interviews
    “It's a fluke that I got this job interview...”
    “I studied for weeks, but I’m still not prepared...”
    “I’m not actually good at this. They’re going to see right through me...”
    If any of these thoughts resonate with you, you're not alone. They are so common they have a name: impostor syndrome.
    It’s that feeling like you’re on the verge of being exposed for what you really are—an impostor. A fraud.
    Impostor syndrome is like kryptonite to coding interviews. It makes you give up and go silent.
    You might stop asking clarifying questions because you’re afraid they’ll sound too basic. Or you might neglect to think out loud at the whiteboard, fearing you’ll say something wrong and sound incompetent.
    You know you should speak up, but the fear of looking like an impostor makes that really, really hard.
    Here’s the good news: you’re not an impostor. You just feel like an impostor because of some common cognitive biases about learning and knowledge.
    Once you understand these cognitive biases—where they come from and how they work—you can slowly fix them. You can quiet your worries about being an impostor and keep those negative thoughts from affecting your interviews.

    Everything you could know

    Here’s how impostor syndrome works.
    Software engineering is a massive field. There’s a huge universe of things you could know. Huge.
    In comparison to the vast world of things you could know, the stuff you actually know is just a tiny sliver:
    That’s the first problem. It feels like you don’t really know that much, because you only know a tiny sliver of all the stuff there is to know.

    The expanding universe

    It gets worse: counterintuitively, as you learn more, your sliver of knowledge feels like it's shrinking.
    That's because you brush up against more and more things you don’t know yet. Whole disciplines like machine learning, theory of computation, and embedded systems. Things you can't just pick up in an afternoon. Heavy bodies of knowledge that take months to understand.
    So the universe of things you could know seems to keep expanding faster and faster—much faster than your tiny sliver of knowledge is growing. It feels like you'll never be able to keep up.

    What everyone else knows

    Here's another common cognitive bias: we assume that because something is easy for us, it must be easy for everyone else. So when we look at our own skills, we assume they're not unique. But when we look at other people's skills, we notice the skills they have that we don't have.
    The result? We think everyone’s knowledge is a superset of our own:
    This makes us feel like everyone else is ahead of us. Like we're always a step behind.
    But the truth is more like this:
    There's a whole area of stuff you know that neither Aysha nor Bruno knows. An area you're probably blind to, because you're so focused on the stuff you don't know.

    We’ve all had flashes of realizing this. For me, it was seeing the back end code wizard on my team—the one that always made me feel like an impostor—spend an hour trying to center an image on a webpage.

    It's a problem of focus

    Focusing on what you don't know causes you to underestimate what you do know. And that's what causes impostor syndrome.
    By looking at the vast (and expanding) universe of things you could know, you feel like you hardly know anything.
    And by looking at what Aysha and Bruno know that you don't know, you feel like you're a step behind.
    And interviews make you really focus on what you don't know. You focus on what could go wrong. The knowledge gaps your interviewers might find. The questions you might not know how to answer.
    But remember:
    Just because Aysha and Bruno know some things you don't know, doesn't mean you don't also know things Aysha and Bruno don't know.
    And more importantly, everyone's body of knowledge is just a teeny-tiny sliver of everything they could learn. We all have gaps in our knowledge. We all have interview questions we won't be able to answer.
    You're not a step behind. You just have a lot of stuff you don't know yet. Just like everyone else.

    4) The 24 Hours Before Your Interview

    Feeling anxious? That’s normal. Your body is telling you you’re about to do something that matters.

    The twenty-four hours before your onsite are about finding ways to maximize your performance. Ideally, you wanna be having one of those days, where elegant code flows effortlessly from your fingertips, and bugs dare not speak your name for fear you'll squash them.
    You need to get your mind and body in The Zone™ before you interview, and we've got some simple suggestions to help.
    5) Why You're Hitting Dead Ends In Whiteboard Interviews

    The coding interview is like a maze

    Listening vs. holding your train of thought

    Finally! After a while of shooting in the dark and frantically fiddling with sample inputs on the whiteboard, you've come up with an algorithm for solving the coding question your interviewer gave you.
    Whew. Such a relief to have a clear path forward. To not be flailing anymore.
    Now you're cruising, getting ready to code up your solution.
    When suddenly, your interviewer throws you a curve ball.
    "What if we thought of the problem this way?"
    You feel a tension we've all felt during the coding interview:
    "Try to listen to what they're saying...but don't lose your train of thought...ugh, I can't do both!"
    This is a make-or-break moment in the coding interview. And so many people get it wrong.
    Most candidates end up only half understanding what their interviewer is saying. Because they're only half listening. Because they're desperately clinging to their train of thought.
    And it's easy to see why. For many of us, completely losing track of what we're doing is one of our biggest coding interview fears. So we devote half of our mental energy to clinging to our train of thought.
    To understand why that's so wrong, we need to understand the difference between what we see during the coding interview and what our interviewer sees.

    The programming interview maze

    Working on a coding interview question is like walking through a giant maze.
    You don't know anything about the shape of the maze until you start wandering around it. You might know vaguely where the solution is, but you don't know how to get there.
    As you wander through the maze, you might find a promising path (an approach, a way to break down the problem). You might follow that path for a bit.
    Suddenly, your interviewer suggests a different path:
    But from what you can see so far of the maze, your approach has already gotten you halfway there! Losing your place on your current path would mean a huge step backwards. Or so it seems.
    That's why people hold onto their train of thought instead of listening to their interviewer. Because from what they can see, it looks like they're getting somewhere!
    But here's the thing: your interviewer knows the whole maze. They've asked this question 100 times.

    I'm not exaggerating: if you interview candidates for a year, you can easily end up asking the same question over 100 times.
    So if your interviewer is suggesting a certain path, you can bet it leads to an answer.
    And your seemingly great path? There's probably a dead end just ahead that you haven't seen yet:
    Or it could just be a much longer route to a solution than you think it is. That actually happens pretty often—there's an answer there, but it's more complicated than you think.

    Hitting a dead end is okay. Failing to listen is not.

    Your interviewer probably won't fault you for going down the wrong path at first. They've seen really smart engineers do the same thing. They understand it's because you only have a partial view of the maze.
    They might have let you go down the wrong path for a bit to see if you could keep your thinking organized without help. But now they want to rush you through the part where you discover the dead end and double back. Not because they don't believe you can manage it yourself. But because they want to make sure you have enough time to finish the question.
    But here's something they will fault you for: failing to listen to them. Nobody wants to work with an engineer who doesn't listen.
    So when you find yourself in that crucial coding interview moment, when you're torn between holding your train of thought and considering the idea your interviewer is suggesting...remember this:
    Listening to your interviewer is the most important thing.
    Take what they're saying and run with it. Think of the next steps that follow from what they're saying.
    Even if it means completely leaving behind the path you were on. Trust the route your interviewer is pointing you down.
    Because they can see the whole maze.
    6) How To Get The Most Out Of Your Coding Interview Practice Sessions
    When you start practicing for coding interviews, there’s a lot to cover. You’ll naturally wanna brush up on technical questions. But how you practice those questions will make a big difference in how well you’re prepared.
    Here’re a few tips to make sure you get the most out of your practice sessions.
    Track your weak spots
    One of the hardest parts of practicing is knowing what to practice. Tracking what you struggle with helps answer that question.
    So grab a fresh notebook. After each question, look back and ask yourself, “What did I get wrong about this problem at first?” Take the time to write down one or two things you got stuck on, and what helped you figure them out. Compare these notes to our tips for getting unstuck.
    After each full practice session, read through your entire running list. Read it at the beginning of each practice session too. This’ll add a nice layer of rigor to your practice, so you’re really internalizing the lessons you’re learning.
    Use an actual whiteboard
    Coding on a whiteboard is awkward at first. You have to write out every single character, and you can’t easily insert or delete blocks of code.
    Use your practice sessions to iron out that awkwardness. Run a few problems on a piece of paper or, if you can, a real whiteboard. A few helpful tips for handwriting code:

    Set a timer
    Get a feel for the time pressure of an actual interview. You should be able to finish a problem in 30–45 minutes, including debugging your code at the end.
    If you’re just starting out and the timer adds too much stress, put this technique on the shelf. Add it in later as you start to get more comfortable with solving problems.
    Think out loud
    Like writing code on a whiteboard, this is an acquired skill. It feels awkward at first. But your interviewer will expect you to think out loud during the interview, so you gotta power through that awkwardness.
    A good trick to get used to talking out loud: Grab a buddy. Another engineer would be great, but you can also do this with a non-technical friend.
    Have your buddy sit in while you talk through a problem. Better yet—try loading up one of our questions on an iPad and giving that to your buddy to use as a script!
    Set aside a specific time of day to practice.
    Give yourself an hour each day to practice. Commit to practicing around the same time, like after you eat dinner. This helps you form a stickier habit of practicing.
    Prefer small, daily doses of practice to doing big cram sessions every once in a while. Distributing your practice sessions helps you learn more with less time and effort in the long run.
    Part 2 will be coming in another post!
    submitted by Cyberrockz to u/Cyberrockz [link] [comments]

    what is this i just downloaded (youtube code?)

    So this is kind of a weird story. I was planning to restart my computer (can't remember why). I spend most of my time watching YouTube videos, so I had a lot of tabs open. I was watching the videos and then closing each tab, but not opening new ones. I was down to 2 tabs, I think (1 was a pretty long video), so I tried to open a YouTube home page tab just to look while I listened to the video. And this is a short excerpt of what I got.





    YouTube
    submitted by inhuman7773 to techsupport [link] [comments]

    Java Reading a CSV File Tutorial - YouTube
    Java Swing Tutorials (GUI) - Read and Write JList with ...
    read json object using javascript and ajax
    Create/Read/Write File using File Server: Node.js
    Java Serializable interface: Reading and writing Objects ...
    Java Programming Tutorial - 81 - Reading from Files - YouTube
    JavaScript FileReader - YouTube

    Options for a chunked file reader:
    * - success: an optional function invoked as soon as the whole file has been read successfully
    * - binary: if true, chunks will be read through FileReader.readAsArrayBuffer, otherwise as FileReader.readAsText. Default is false.
    * - chunk_size: the chunk size to be used, in bytes. Default is 64K.

    In Node.js there is one library, fs (File-System), which is used to manage all read and write operations. By using the fs module we can read and write files in both a synchronous and an asynchronous way. There are many ways in which we can read and write data to a file; let's have a look at each of them one by one.

    JavaScript Read and Write to Text File: "I tried your example code and it worked. I need some modification to it: right now you let the user browse for a file and the operation is performed on the selected file, but I want my script to always read from the same file, with no option to browse. When the page loads, this function should be called automatically and the hardcoded file should be read. Can you help me out?"

    When we read the file as a text string, we will get all its text content and can parse it. For example, a user may drop a new CSS file, and you can apply it to your page straight away. Or, if it's a file with an article that should be published, you can read it, apply some styles to the titles and place it in the text editor on your page.

    How do I encode/load/process the file to avoid the encoding issues, so the file being received in the HTTP request is the same as the file before it was uploaded? Some other possibly useful information: if instead of using fileReader.readAsBinaryString() I use fileObject.getAsBinary() to get the binary data, it works fine.

    FileReader includes four options for reading a file, asynchronously:
    - FileReader.readAsBinaryString(BlobFile) – the result property will contain the file/blob's data as a binary string. Every byte is represented by an integer in the range [0..255].
    - FileReader.readAsText(BlobFile, opt_encoding) – the result property will contain the file/blob's data as a text string. By default the ...

    Summary: File objects inherit from Blob. In addition to Blob methods and properties, File objects also have name and lastModified properties, plus the internal ability to read from the filesystem. We usually get File objects from user input, like <input> or Drag'n'Drop events (ondragend). FileReader objects can read from a file or a blob, in one of three formats.

    Read file metadata: the File object contains a number of metadata properties about the file. Most browsers provide the file name, the size of the file, and the MIME type, though depending on the platform, different browsers may provide different or additional information.

    Given a text file, write a JavaScript program to extract the contents of that file. There is a built-in module in Node.js which handles all the reading operations, called fs (File-System). It is basically a JavaScript program (fs.js) where the functions for reading operations are written. Import the fs module in the program and use its functions to read text from the files in the system.

    In order to read a file chosen by the user, using a file open dialog, you can use the <input type="file"> tag. You can find information on it from MSDN. When the file is chosen you can use the FileReader API to read the contents.


    Java Reading a CSV File Tutorial - YouTube

    In this tutorial, you will see how we can retrieve and read JSON data using ajax. In this video tutorial we shall learn creating a folder/directory, creating a file, reading from a file, writing to a file and copying content from one file to another. Netbeans Tutorial - Read Write JList Data From Text File: this video helps you to understand how we can write JList data to a file and load a JList from the same text file. We show how to read and parse a .csv text file, using Scanner. We explain the Serializable interface and show how to write and read an object from a file.
