
API Token Binary.com


Does anyone know a good BTC/USD binary options broker, ideally with API access? /r/Bitcoin

Does anyone know a good BTC/USD binary options broker, ideally with API access? /r/Bitcoin submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Good place to trade bitcoin binary options through API interface?

I did some searching on Google and this subreddit, and didn't find much that looked up-to-date or trustworthy.
I'm just looking for a reputable trading site that offers binary options and has an API for access. And yes, I know that binary trading is essentially pure speculation. :)
submitted by sigma_noise to BitcoinMarkets [link] [comments]

I'm looking for a Python API to a binary options trading platform.

I have an algorithm for binary options trading, but I don't feel like manually working a GUI to do my trades.
Could someone point me to a resource for executing my trades via Python?
submitted by metaperl to Python [link] [comments]

PCSX2 official Arch Linux package not recommended

Arch Linux's community package for the PCSX2 emulator, which is in their official multilib repositories, has gone through some questionable changes in the way the binary is compiled. I chased the maintainers up about defining OPENCL_API=ON, DISABLE_ADVANCE_SIMD=ON and EGL_API=OFF. After making some changes, they went ahead and built and distributed the 64-bit version of the emulator prematurely. On top of that, the package has been moved off the stable releases it had always followed until now.
With these changes, as well as future unwanted changes, I would like to say that for the foreseeable future we do NOT recommend using the pcsx2 package in the Arch Linux repositories. Instead, please use the pcsx2-git package on the AUR, which is maintained by weirdbeardgame kenshen (a contributor to the project) with help from myself and others. The AUR package is much closer to how the emulator developers would prefer it to be built. If you would like a package that distributes a precompiled binary, please voice your opinion; if there is enough interest, we might get one going. If the package maintainer for Arch Linux's repositories reads this, please consider looking at our PKGBUILD, following it much more closely in your version, and keeping your version at the stable 1.6 release.
Thank you
EDIT: Add explanation for the SIMD build flag
EDIT-2: I want to clarify that this is in the testing repository and they haven't pushed this to the main repositories yet
submitted by JibbityJobbity to linux_gaming [link] [comments]

The first official release of the ZOIA Librarian app is now available!

Version 1.0 is now out for Windows 10, Mac OS X, and Linux (Ubuntu)! It can be downloaded here https://github.com/meanmedianmoge/zoia_lib - see the "How to Install" section.
EDIT: Mac 1.0 release has been updated (see the link above to download the zip), and it should open successfully upon double-clicking the .app file! Apologies for any inconvenience.
If you have a GitHub account, feel free to create an issue regarding any performance issues you encounter. If you don't have a GitHub account, send feedback and bugs to me at [mikebmoger@gmail.com](mailto:mikebmoger@gmail.com).
Overview and tutorial video: https://www.youtube.com/watch?v=JLOUrWtG1Pk
User Manual: https://github.com/meanmedianmoge/zoia_lib/blob/master/documentation/User%20Manuals/ZOIA%20Librarian%20-%20User%20Manual%20-%20Version%201.0.pdf
Changelog is below. Special thanks to our beta testers, contributors, and supporters for the interest in this application!
Patch Notes Version 1.0 (September 25, 2020)
New Features
- Finalized ZOIA binary parsing implementation. Again, massive thanks to djigneo/apparent1 for the initial C# code. As of this release, all features of the patch are fully exposed and can be decoded into a JSON object for further use.
- Patch visualizer has been updated with more information to help you understand patches at a quick glance.
- Added the ability to search and sort patches by author name. This applies to the Local and Bank tabs only; PS tab author search and sort will not be supported at this time due to the API structure.
- Updated patch importing so that patches with near-identical names are merged upon import (instead of strictly identical names).
- Updated the behavior of the SD and Bank tables so that multiple patches can be selected and moved in different ways: hold Shift and click the start and end patches to move, or hold Ctrl/Cmd and click each patch you'd like to move.
- Patches can now be moved into a bank in the following ways: dragging single or multiple selections (same options as above) at once, or clicking the Add to Bank button for one selection at a time.
- Added a Clear Bank button to wipe the bank tables clean.
- Added a new Help toolbar which allows users to access documentation and useful ZOIA resources. These display in the PS tab browser panel. You can also search for different commands/shortcuts.
- Added a Reset UI menu option in the event that users mangle the UI panels or tables.
- Updated the light theme colors to give it a more muted look.
- Alternating row colors is now a saved preference. It saves whatever the current setting is upon closing the application.
- Added a step-by-step guide for compiling the application from source, for developers, contributors, or users who were unable to open the beta builds.
- Added our first Linux build! We aim to support the latest stable version of Ubuntu going forward. If you are a Linux user who prefers another distribution, please contact me.
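The real format decoding lives in the librarian's parser, but the general idea of turning a binary patch into a JSON object can be sketched like this. Note: the header layout, field names, and values below are entirely hypothetical illustrations, not the actual ZOIA format.

```python
import json
import struct

# Hypothetical layout (NOT the real ZOIA format): a 16-byte little-endian
# header of magic (4 bytes), version, module count, and name length,
# followed by the patch name.
def parse_patch(data: bytes) -> dict:
    magic, version, n_modules, name_len = struct.unpack_from("<4sIII", data, 0)
    name = data[16:16 + name_len].decode("ascii")
    return {
        "magic": magic.decode("ascii"),
        "version": version,
        "modules": n_modules,
        "name": name,
    }

# Build a fake patch blob and round-trip it into JSON.
blob = struct.pack("<4sIII", b"ZOIA", 1, 3, 5) + b"Drone"
print(json.dumps(parse_patch(blob)))
```

Once the binary is decoded into a plain dict like this, exposing it as JSON to other tools (as the Future Plans section below describes) is trivial.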
Fixes
- Fixed an issue that occurred while importing a version history (Mac).
- Removed the threads used with menu action multi-import functions (Mac temporary fix).
- Fixed an issue where the dates of imported patches were back-dated to the history of the SD card.
- Fixed an issue with SD card imported files having mangled filenames (Windows). This also caused patches to not export properly.
- Fixed an issue where changing the font/font size didn't apply to themes or buttons.
Known Issues
- Certain patch binaries cannot be fully decoded due to being saved on deprecated ZOIA firmware.
- Saved UI preferences are not being applied correctly for the Local Storage tab - specifically the vertical splitter (Mac).
Future Plans
- Expansion view of routing for the patch visualizer. Right now, the connections are displayed on a module-block level, but not from a general patch level. The expander would provide an in-depth visualization of audio and CV routing, likely to be displayed in a new tab.
- Extend the binary decoder methods into an API for other applications/programs to utilize.
- Simplify and automate code structure for releases (currently, a minimal working version of the code needs to be created for the app-building process).
- Allow for custom themes/colors in the UI.
- Actually fix threading issues associated with menu action multi-imports.
As always, we welcome any feedback you may have. Thanks for being awesome :) - Mike M.
submitted by meanmedianmoge to ZOIA [link] [comments]

I created a mathematically optimal team generator!

Hi all,

I've been playing FPL for a few years now, and by no means am I an expert. However, I like math and particularly optimization problems. And a few days ago I thought to use my math knowledge for something useful.

My goal was to start from some metric that predicts the number of points a player will score (either in the next gameweek, or over the whole season). From that metric, I wanted to generate the mathematically optimal team, i.e. choose the 15 players that will give me the most points while staying within budget. I realized this is a constrained knapsack problem, which can be solved by dedicated solvers as long as the optimization problem is properly defined. Note that while I make a big assumption by choosing some metric to start from, the solver actually finds the optimal team, without any prior assumptions about best formation, budget spread, etc.!

(Warning: from this point onward it gets kinda math-y, so turn back or skip ahead to the results if that's not your thing)


So first, the optimization variable needed to be defined. For this purpose I introduced a binary variable x which is basically a vector of all players in the game, where a value of 1 indicates that player is part of our dream team and a 0 means it's not.

Secondly, an objective function needs to be defined, which is what we want to maximize. In our case, this is the total expected points our dreamteam will score. I included double captain points and reduced points for bench players here. The objective function is linear, which is nice since it is convex (an important property which makes solving the problem much easier, and is even required for most solvers).

Lastly come the constraints. Obviously, there is the 100M budget constraint. Then we also want the required numbers of goalkeepers, defenders, midfielders and forwards. Then we need to keep in mind the formation constraints, and lastly the max-3-players-per-club constraint. Luckily, these are all linear (so convex) constraints.
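Putting the variable, objective, and constraints together, the full program looks roughly like the following (my own notation, not the author's: $p_i$ is the expected points of player $i$, $c_i$ the price, with the captain/bench weighting folded into $p_i$):

```latex
\max_{x \in \{0,1\}^{N}} \; \sum_{i=1}^{N} p_i x_i
\quad \text{subject to} \quad
\begin{cases}
\sum_{i} c_i x_i \le 100 & \text{(budget)} \\[4pt]
\sum_{i \in \mathrm{GK}} x_i = 2,\;
\sum_{i \in \mathrm{DEF}} x_i = 5,\;
\sum_{i \in \mathrm{MID}} x_i = 5,\;
\sum_{i \in \mathrm{FWD}} x_i = 3 & \text{(squad composition)} \\[4pt]
\sum_{i \in \text{club } k} x_i \le 3 \quad \forall k & \text{(max 3 per club)}
\end{cases}
```

Since the objective and all constraints are linear and $x$ is binary, this is a mixed-integer linear program, which is exactly the class of problem solvers like Gurobi handle.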

I solved this problem using CVX for MATLAB, particularly with the Gurobi solver since it allows mixed integer programs. It tries to find the optimal variable x* which maximizes the objective function while staying within the constraints. And amazingly, it actually comes up with solutions!
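The author's CVX/MATLAB code isn't reproduced here, but the knapsack structure can be illustrated with a dependency-free brute-force sketch on a made-up toy pool (names, prices, clubs, and points are all invented, and the squad rules are scaled down; a real MILP solver like Gurobi is what makes the full ~500-player problem tractable):

```python
from itertools import combinations

# Toy player pool: (name, club, cost, expected_points). All values invented.
players = [
    ("A", "LIV", 5.0, 60), ("B", "LIV", 6.0, 80), ("C", "LIV", 7.0, 90),
    ("D", "MCI", 4.5, 50), ("E", "MCI", 8.0, 95), ("F", "CHE", 5.5, 72),
]

BUDGET = 20.0
TEAM_SIZE = 3      # scaled down from FPL's 15
MAX_PER_CLUB = 2   # scaled down from FPL's max-3 rule

def best_team(pool):
    """Exhaustively check every feasible team; return the points-maximizing one."""
    best, best_points = None, -1
    for team in combinations(pool, TEAM_SIZE):
        cost = sum(p[2] for p in team)
        clubs = [p[1] for p in team]
        if cost > BUDGET:
            continue  # budget constraint
        if any(clubs.count(c) > MAX_PER_CLUB for c in clubs):
            continue  # per-club constraint
        points = sum(p[3] for p in team)
        if points > best_points:
            best, best_points = team, points
    return best, best_points

team, points = best_team(players)
print(sorted(p[0] for p in team), points)  # ['B', 'E', 'F'] 247
```

Note the pure top-3 by points (B, C, E) is over budget here, so the optimizer trades down to B, E, F; that kind of non-obvious substitution is precisely what the solver automates.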

So like I said before, I need to start from some metric that indicates how many points a player will score (if you have any recommendations, let me know!). For a lack of better options, I chose two different metrics:

  1. The total points scored by the player last year
  2. The expected points scored by the player in the next gameweek (ep_next in the FPL API, for fellow nerds)

Obviously, both metrics are imperfect. The first one doesn't take into account transfers, promoted teams, injuries, fixtures, position changes, etc. However, it should work decently for making a set-and-forget team of proven PL players.

The second metric seems to have a problem with overrating bench players of top PL teams such as Ozil, Minamino, etc. I'm not really sure why, but it's a metric taken directly from FPL with undisclosed underlying math so it's not my problem. Also, keep in mind that since the first gameweek does not feature City/Utd/Burnley/Villa players, this metric predicts them to score 0 points so they won't feature in the optimal team.

Team 1: Last year's dreamteam

Team 2: Next week's dreamteam

Both teams cost exactly 100M.

At first glance, there are some obvious flaws with both teams, but most of them are because the metric used as input is flawed, as I explained before. Lundstram is obviously a much worse choice this year due to various reasons, and Team 2 has some top 6 players which are very much not nailed.

However. What I think is interesting is that both teams have only 2 starting midfielders. This despite the trend of people stacking premium midfielders. On the other hand, premium defenders seem to be very good value, and the importance of TAA and Robertson is underlined. Similarly, near-premium forwards in the 7.5-10 price range seem to be a good choice.

I'm quite content with my optimal team generator. Using it, I don't need to use vague value metrics such as VAPM. The input can be any metric which relates simply to how many points a player will score. Choices about relative value of e.g. defenders against midfielders, formation, budget spread etc. are all taken out of my hands with this team generator. The team that is generated is only as good as the metric used as input. But given a certain input metric, you can be sure that the generated team is optimal.

I would gladly share my MATLAB code if there is any interest. Also, I'm open to suggestions on how to extend it. EDIT: Here it is.

(Tiny disclaimer: Remember when I said: "without any prior assumptions"? That is a lie. There is one tiny assumption I made, which is how often bench players are subbed on. I guesstimated this to happen approximately 10% of the time.)
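As a toy illustration of how the captaincy bonus and that guesstimated 10% bench weighting enter the objective function (all numbers invented):

```python
BENCH_WEIGHT = 0.1  # guesstimate from the post: bench players play ~10% of the time

def squad_expected_points(starters, bench, captain_points):
    # The captain's points appear once in `starters` and once more here (double points);
    # bench players' points are discounted by the 10% appearance assumption.
    return sum(starters) + captain_points + BENCH_WEIGHT * sum(bench)

# 11 starters at 5 expected points each (the captain is one of them, so his 5
# points are added once more), and 4 bench players at 3 expected points each.
print(squad_expected_points([5] * 11, [3] * 4, 5))
```

Because this is just a weighted sum of per-player points, it stays linear in the selection variable, which is what keeps the overall problem convex-friendly for the solver.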
submitted by nectri42 to FantasyPL [link] [comments]

Gridcoin "Fern" Release

Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite: what was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.



Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
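The dates above are simple to check: a 14-day grace period starting at an estimated October 4 transition does indeed end on October 18 (a sketch only; the actual deadline depends on the real date the v11 transition height is reached):

```python
from datetime import date, timedelta

v11_transition = date(2020, 10, 4)  # estimated transition date from the post
grace_period = timedelta(days=14)   # beacon validation grace period

deadline = v11_transition + grace_period
print(deadline)  # 2020-10-18
```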
The transition height is also when the team requirement will be relaxed for the network.


Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.


The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog



Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes, instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For the long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.







As a reminder:









Detailed Changelog

[] 2020-09-03, mandatory, "Fern"





submitted by jamescowens to gridcoin [link] [comments]

./play.it 2.12: API, GUI and video games

./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux
It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since for two years we had slowed down the addition of new features. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration


Like many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't last long, and so was born the first public Git repository of the project. The easing of collaborative work was only accidentally achieved by this quest for permanence, and wasn't the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies of a forge platform was the final deciding factor towards hosting a dedicated forge.
So it happened: we deployed a forge platform on a dedicated server, benefiting hugely from the tremendous work achieved by the Debian team maintaining the GitLab package. In return, we tried to contribute our findings to improve the packaging of this software.
That was not planned, but this migration happened just a short time before the announcement "Déframasoftisons Internet !" (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and since then has been moved to another VPS, rented from Hetzner. The specifications are similar, as well as the service, but thanks to this migration our hosting costs have been cut in half. Keeping in mind that this is paid by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.


Public API

With the collection of supported games growing endlessly, we have started the development of a public API providing access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations between those items are of course handled, enabling requests like: "What packages are required on my system to install Cæsar Ⅲ?" or "What are the free (as in beer) games handled via DOSBox?".
Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling application (downloading, installation, starting, etc.) using ./play.it as one of its building bricks.
For those curious about the technical side: it's an API based on Lumen, making requests on a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow anyone who wants to easily install a local version.
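The real API is backed by MariaDB with a schema versioned on the project's forge. As a rough illustration of the kind of relational request described above ("what archives does game X need?"), here is a toy version with an invented, simplified schema and invented data, using SQLite so it runs anywhere:

```python
import sqlite3

# Invented, simplified schema for illustration only; the actual ./play.it
# database structure is versioned on their forge.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE games(
    id INTEGER PRIMARY KEY,
    title TEXT
);
CREATE TABLE archives(
    id INTEGER PRIMARY KEY,
    game_id INTEGER REFERENCES games(id),
    filename TEXT
);
INSERT INTO games VALUES (1, 'Caesar III');
INSERT INTO archives VALUES (1, 1, 'setup_caesar3_2.0.0.9.exe');
""")

# "What archives are required to install Caesar III?"
rows = db.execute("""
    SELECT a.filename
    FROM archives a
    JOIN games g ON g.id = a.game_id
    WHERE g.title = 'Caesar III'
""").fetchall()
print(rows)
```

An HTTP API in front of such a database simply wraps queries like this one behind stable endpoints, which is what lets third-party tools (like the GUI prototype described below) consume the data without knowing the schema.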

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, while the absence of a database and the plain-text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), these features became more inconvenient as the library of games supported by ./play.it grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like lending a helping hand on this task, some priority tasks have been identified to allow opening a new website able to replace the current one. And for those interested in technical details, this website is developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid box as the API.


Graphical user interface

A regular comment about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in a terminal remains somewhat intimidating. Our answer until now has been that while the project itself doesn't aim to provide a graphical interface (KISS principle: "keep it simple, stupid"), it would be relatively easy to develop a graphical front-end for it later on.
Well, that is now a reality. Around the time of our latest publication, one of our contributors used the API we just talked about to develop a small prototype that is usable enough to warrant a little shout-out. :-)
In practice, it is a small Python 3 program (a GUI written entirely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands being run, though the user shouldn't have to type anything into it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy: a script of less than 500 lines of code, written quickly over a weekend, was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a separate repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent other similar projects from being born, for example using a different language or graphical toolkit (we, globally, have no particular affinity for Python or GTK).
Using this GUI takes three steps. First, a list of available games is displayed, coming directly from our API; just select the game you want to install from the list (optionally using the search bar). It then switches to a second display, which lists the required files. If several alternatives are available, users can select the one they want to use. All those files must be in the same directory, and the address bar at the top lets you choose which one to use (clicking the open button at the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move on to the third step, which is just watching ./play.it do its job. :-) Once done, a simple click on the button at the bottom will run the game (though from this point the game is fully integrated into your system as usual, so you no longer need this tool to run it).
To download potentially missing files, the GUI will use, depending on what's available on the system, either wget, curl or aria2c (the last one also handling torrents), whose output will be displayed in the terminal during the third phase, just before the scripts run. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input if the corresponding environment variable is set, which is more user-friendly); otherwise su will be used.
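The fallback order described above (wget, then curl, then aria2c) could be probed like this in Python; this is a sketch of the idea only, and the actual GUI's logic may differ:

```python
import shutil

def pick_downloader():
    # Try the tools in the order described in the post: wget, curl, aria2c
    # (the last one also handles torrents). shutil.which() returns the path
    # of the executable if it is found on PATH, or None otherwise.
    for tool in ("wget", "curl", "aria2c"):
        if shutil.which(tool):
            return tool
    raise RuntimeError("no supported download tool found (wget, curl or aria2c)")
```

The same which-is-available pattern applies to the privilege escalation step, preferring sudo over su when both are present.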
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not supported by ./play.it yet, you should ask for it in the dedicated tracker on our forge. The only requirement to be a valid request is that there exists a version of the game that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are:
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)


submitted by vv224 to linux_gaming [link] [comments]

Study plan for MS-500: Measured skills + Microsoft Docs + some labs + test exams?

Hi all!
Would like your input on my MS-500 study plan. Decided to do Exam MS-500: Microsoft 365 Security Administration.
I have 2-3 years of M365 experience. I have at least poked around in most of the areas covered by MS-500, and some of them I work with daily. Reading the Skills Measured turned up nothing I didn't at least know existed. So, I decided to make a study plan.

My MS-500 Study Plan
  1. Add links to Microsoft Docs to the Measured Skills topics (see example of the poor mans study guide below)
  2. Do some labs on the stuff that is new or needs a refresher
  3. Test exams (Fastlane seems to have a free one)
  4. Take the exam
  5. Pass it 🍻

Any holes in my plan?
One concern I do have is that the Docs are kept up to date, but perhaps contain too much information that isn't covered by the exam? (I do not have a lot of time, so my studying must be efficient, but I also want to learn for real life; no brain dumps.)
Thanks! :)

Example on how to add links from the measured skills document

Implement and manage identity and access (30-35%)

Secure Microsoft 365 hybrid environments
submitted by Lefty4444 to Office365 [link] [comments]

How to deploy Angular 2 application on AWS? Need help regarding CI/CD and scaling

Hi guys, I am developing an internal system for my organisation on Angular 9.0.6. It is currently deployed to an AWS Lambda function via Serverless, but this setup has multiple problems that I am currently facing:
1st - CI/CD setup: Our repository is hosted on Gitlab. I was trying to use Gitlab CI tool to deploy my code to staging/production. But it gets stuck at
Serverless: Excluding development dependencies... 
This stage takes almost 45 minutes of build time and then times out. My package exclusion in serverless.yml:
package:
  exclude:
    - src/**
    - node_modules/**
    - firebug-lite/**
    - e2e/**
    - coverage/**
    - '!node_modules/aws-serverless-express/**'
    - '!node_modules/binary-case/**'
    - '!node_modules/type-is/**'
    - '!node_modules/media-typer/**'
    - '!node_modules/mime-types/**'
    - '!node_modules/mime-db/**'
Where am I going wrong? Should I be looking at AWS CodeBuild or any other tool?
PS: I also evaluated Jenkins as an option, but the entire JAVA backend microservices are getting deployed via Gitlab CI so a Jenkins setup won't add much value.
2nd - Upgrading Angular version:
The other problem is that when I upgrade my code via angular-cli (or otherwise), the Lambda function returns a 502 on the main chunk. It loads all supporting bundles (e.g. vendor.js, polyfills.js) correctly. I checked my CloudWatch logs with enhanced monitoring enabled, but there is no error corresponding to this.
Everything is working fine and compiling without issues on local server with and without AOT and production build flags.
Anyone having faced a similar issue to this?
I encountered the same issue when adding the @ckeditor/ckeditor5-angular library to my package.
My package dependencies are:
"dependencies": {
  "@angular/animations": "~9.0.6",
  "@angular/cdk": "~9.1.3",
  "@angular/common": "~9.0.6",
  "@angular/compiler": "~9.0.6",
  "@angular/core": "~9.0.6",
  "@angular/forms": "~9.0.6",
  "@angular/material": "^9.1.3",
  "@angular/platform-browser": "~9.0.6",
  "@angular/platform-browser-dynamic": "~9.0.6",
  "@angular/router": "~9.0.6",
  "@angular/service-worker": "~9.0.6",
  "@ckeditor/ckeditor5-angular": "^1.2.3",
  "@ckeditor/ckeditor5-build-classic": "^22.0.0",
  "@fullstory/browser": "^1.4.3",
  "@ng-toolkit/serverless": "^8.1.0",
  "@sentry/browser": "^5.12.1",
  "@sentry/fullstory": "^1.1.2",
  "@swimlane/ngx-charts": "^13.0.2",
  "@zxing/ngx-scanner": "^3.0.0",
  "apollo-angular": "^1.8.0",
  "apollo-angular-link-http": "^1.9.0",
  "apollo-cache-inmemory": "^1.6.0",
  "apollo-client": "^2.6.0",
  "apollo-link": "^1.2.11",
  "apollo-link-context": "^1.0.20",
  "apollo-link-error": "^1.1.13",
  "apollo-link-ws": "^1.0.19",
  "apollo-utilities": "^1.3.3",
  "aws-serverless-express": "^3.3.6",
  "bootstrap": "^4.4.1",
  "cors": "^2.8.5",
  "dexie": "^3.0.2",
  "graphql": "^14.5.0",
  "graphql-tag": "^2.10.0",
  "jwt-decode": "^2.2.0",
  "moment": "^2.25.1",
  "ng2-pdfjs-viewer": "^5.0.5",
  "ngx-device-detector": "^1.3.20",
  "ngx-kjua": "^1.7.0",
  "ngx-mat-daterange-picker": "^1.1.4",
  "rxjs": "~6.5.4",
  "serverless-api-compression": "^1.0.1",
  "subscriptions-transport-ws": "^0.9.16",
  "tslib": "^1.10.0",
  "zone.js": "~0.10.2"
},
"devDependencies": {
  "@angular-devkit/build-angular": "~0.900.6",
  "@angular/cli": "^9.0.6",
  "@angular/compiler-cli": "~9.0.6",
  "@angular/language-service": "~9.0.6",
  "@types/jasmine": "~3.3.8",
  "@types/jasminewd2": "~2.0.3",
  "@types/node": "^8.10.59",
  "codelyzer": "^5.0.0",
  "jasmine-core": "~3.4.0",
  "jasmine-spec-reporter": "~4.2.1",
  "karma": "^4.4.1",
  "karma-chrome-launcher": "~2.2.0",
  "karma-coverage-istanbul-reporter": "~2.0.1",
  "karma-jasmine": "~2.0.1",
  "karma-jasmine-html-reporter": "^1.4.0",
  "opencollective": "^1.0.3",
  "protractor": "~5.4.0",
  "serverless": "^1.60.0",
  "serverless-apigw-binary": "^0.4.4",
  "ts-loader": "^6.2.1",
  "ts-node": "~7.0.0",
  "tslint": "~5.15.0",
  "typescript": "~3.7.5",
  "webpack-cli": "^3.3.10"
}
I don't have much experience in Lambda setups. Any specific place where I should be looking at to debug this issue?
submitted by shreeshkatyayan to Angular2 [link] [comments]

How to name sync & async items?

How should I organize parallel sets of synchronous and asynchronous modules, structs, and functions?
  1. This doesn't compile:
    pub mod async; // keyword, no good
    pub mod sync;
    I considered async_ and r#async but don't want to get punched.
  2. sync in std::sync means "synchronization" not "synchronous" so maybe that's not the best?
  3. Should I make default methods synchronous and add a suffix for async ones: open() and open_async()? (Async is the cool stuff, I don't like giving it the crappier name...)
  4. I've been suggested to make the async code the default and hide the sync stuff in a module.
    async fn open() -> io::Result;
    mod blocking {
        fn open() -> io::Result;
    }
Other ideas? Are there any popular libraries that do both sync and async?
submitted by jkugelman to rust [link] [comments]

Best Practices for A C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - particularly taking advantage of some of the code reuse and power of C++ (constexpr, some loose template programming, stronger type checking, RAII, etc.).
I would consider myself maybe an 8/10 C programmer but would conservatively rate myself as 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash maps, lambdas as I currently understand them) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass {
private:
    unsigned int a;
    unsigned int b;
public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs) {
        return true;
    } else {
        if (this->a == rhs.a && this->b == rhs.b) {
            return true;
        }
    }
    return false;
}
As opposed to this:
/* Overload example 2 (overloaded outside of class) */
class CD {
private:
    unsigned int c;
    unsigned int d;
public:
    CD(unsigned int _c, unsigned int _d) : d(_d), c(_c) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) const,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: the function args are constant references,
 * so why do I have to promise that the member functions have no side effects
 * on the private object members? Is this something specific to the == operator? */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I have most certainly misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error; I haven't actually tested it):
/* Example 3: (enums: valid in C, invalid in C++) */
enum COLOR {
    RED,
    BLUE,
    GREY
};

enum PET {
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE {
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++, you can also declare enums as a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to gcc option -fuse-packed-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with the type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING {
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
struct aggregate {
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction {
private:
    unsigned int private_member1;
    unsigned int private_member2;
protected:
    unsigned int stuff_for_child_classes;
public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables, in cases where I need byte-wise alignment (such as device register overlays) and have to guarantee the placement of precise members?

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
/* Example 5: Linking a C library */
/* Disable name-mangling, and then give the C++ linker/toolchain the
 * compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and the pitfalls that may ensue when mixing the two).

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments are frequently (especially on embedded targets) handled at runtime. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++ and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions/APIs you may provide to other parts of a project.
--EDIT-- markdown formatting
submitted by aWildElectron to cpp [link] [comments]

Azure App Service with SignalR (Not Azure SignalR)

I'm struggling to get SignalR working on an Azure App Service. The API uses .NET Core 3.1.
When developing locally, I can successfully connect to the Hub via WebSockets; after deploying to Azure, it falls back to using ServerSentEvents.
When checking the Network tab of chrome, I saw this response from Azure:
availableTransports: [
    0: {transport: "ServerSentEvents", transferFormats: ["Text"]},
    1: {transport: "LongPolling", transferFormats: ["Text", "Binary"]}
]
I did enable ARR Affinity and Websockets but it still looks like it's blocking Websockets.
I tried using skipNegotiation to force Azure to use WebSockets, but no success.
Has anyone been able to use SignalR on an Azure App Service - App with Websockets instead of SSE/LongPolling?
There doesn't seem to be an option to open ports or anything. I'm out of ideas :/
submitted by zerawk to dotnet [link] [comments]

My Beginners Guide to Choosing the Right Server

I am here to tell you the difference between commonly used Minecraft Server Variations (Paper), Minecraft APIs (Bukkit), and server connectors (Bungee).
Minecraft Servers:

  1. Vanilla: Vanilla is the official server that is downloadable on Mojang’s website. This is officially supported and endorsed by Mojang. There are no modifications, and it is almost the same as a constantly running LAN game.
  2. CraftBukkit: CraftBukkit is the original modified Minecraft server. It has the ability to run plugins and is a modification of Minecraft’s code. CraftBukkit was partially shut down due to copyright issues with Mojang. Now the only way to get it is through BuildTools by Spigot. {Uses Bukkit API}
  3. Spigot: A fork of CraftBukkit. Spigot was created to do right the things CraftBukkit did wrong. Spigot is one of the most popular servers to run. It has optimizations, more settings, and solved the legal issue that CraftBukkit had. This was done through BuildTools. It is basically the official server, but complicated by the fact that the binary cannot be distributed as-is. This means that you have to compile the server on your own, though it really only requires a few clicks. Spigot also has an extended API. This allows many more plugin options and more possibilities. {Uses Spigot API}
  4. Paper: A fork of Spigot. Spigot is getting slower and people don’t like running BuildTools. There are also many unfixed exploits and bugs in the Spigot code. That is where Paper comes in. As a fork of Spigot, it can also run plugins. Paper also has its own extended API on top of the Spigot API. This adds even more features. Paper is considered by many the best Minecraft server to run due to the fact that it is faster than Spigot, has more options than Spigot, and has many bug fixes that Spigot doesn’t have. Another plus of Paper is that there is no BuildTools. You just download and run the jar. They have already compiled the code for you, meaning you just put it in a folder and run it. {Uses Paper API}
  5. Forge: A server for mods. Forge is a modded server. It also has a modded client. Plugins differ from mods because mods require client-side modifications. Basically, you can join a server with plugins from any Minecraft client, but you can only join a Forge server from a modded client. The advantage of mods is that they completely change Minecraft. Mods have the ability to edit practically every line of Minecraft’s code, making them the most customizable out of all of the server options. However, Forge requires a modded client and stronger hardware. {Uses Forge}
  6. Sponge: Sponge was made to be an optimized Forge server. In other words, what Sponge is to Forge is what Spigot was to CraftBukkit. Sponge uses Forge in the server but provides many optimizations and bug fixes. The only downside is that it lost support after 1.12, likely due to Minecraft practically being rewritten and the developers not being able to keep up. {Uses Sponge API}
Next we have the API that allows plugins to run. Note: These won’t work on Vanilla and only work on certain servers:

  1. Bukkit API: The original plugin API. Bukkit API is the base for most of the Modern APIs. Bukkit provides many features. If you are developing a plugin, you are likely using some version of this. It is integrated into CraftBukkit.
  2. Spigot API: The popular fork of Bukkit API. Spigot API is a better version of Bukkit API and it has additional features. This is the most commonly used API. It is integrated into Spigot.
  3. Paper API: A fork of Spigot API. Paper API is Spigot API with more classes. Paper API would be the best API but a majority of servers are still on Spigot. It is integrated into Paper.
  4. Sponge API: Sponge API is a standalone API for developing plugins that work with Sponge. You will likely only use this if you have a Sponge server.
  5. Forge: Forge isn’t exactly an API, but it is a platform for creating mods. It is used by both Forge and Sponge servers.
Lastly, we have the Reverse Proxy Server Connectors. These allow you to create a network of multiple servers while it still looks like 1 server. Note: Most of these will work with almost any of the Server Variations.

  1. BungeeCord: BungeeCord is one of the most popular Reverse Proxy Server Connectors. BungeeCord was created by and is maintained by the Spigot Team. BungeeCord has direct support with Spigot servers.
  2. Waterfall: Waterfall is a fork of BungeeCord. Waterfall provides more features and customization than BungeeCord, which is why it is so popular. Waterfall was created by and is maintained by PaperMC. Waterfall is designed to work with Paper.
  3. Others: There are many other forks of Spigot/Waterfall and I am too lazy to look them up. If you would like to please mention them and their features in the comments
That’s that. This is my beginners guide to Minecraft Server Variations, APIs, and Reverse Proxies.
Finally, I know what you are thinking. WHERE IS FABRIC IN THERE!
Well, I don’t know or use Fabric so I can’t talk much about it. There are many people more qualified than me to describe Fabric. If you know Fabric, comment what it does so more people can understand it.
Also, just to point out I spent 1.5 HOURS typing this so please don’t be rude if I messed up or forgot something.
submitted by I-Is-crouton to admincraft [link] [comments]

Immutability on Synology NAS using Minio

I'm scoping out a project where we'll be placing a couple of Synology NASes as backup repositories. I've read about using Minio inside of a container (Docker) running on the NAS, and have a theory that we could use that setup to leverage data immutability using local storage for those clients that are cloud-averse. Just curious if anyone has tried such a thing yet. In this case, initial backups would go to a NAS at the primary location, and then a second NAS at the recovery site would be where we'd have immutability enabled; it would serve as the copy job target. I'm new to Docker and Minio, but the theory seems to be sound... curious if anyone else has tried it. The alternative I guess would be to run Minio as a VM and have it connect to the NAS instead of running it ON the NAS.
Native S3 compatibility would be great as well on the Synology, but I've not seen anything about it....looks like we have to jump a couple of hoops for that to happen. Thoughts?
Also, I'm sure I saw somewhere a list of services that are fully and partially S3 compatible for immutability, but I can't seem to find it now. Pretty sure it was on Reddit. If anyone else happened to bookmark it, I'd like to have another look. Last I checked Minio was partially compatible due to having locking enabled, but not versioning, or something like that, but it's been a month or two since I saw that. Thanks!

EDIT: Just found it...looks like I was looking in the Veeam Forums. Still curious of anyone has tried to roll their own S3 compatible storage on premise in this fashion. https://forums.veeam.com/object-storage-f52/immutability-with-minio-possible-t64307.html
Looks like Minio is now listed as compatible and appears in the Veeam Ready list, but it's not listed under Compatible + Immutability, so maybe it's not fully compatible with this feature?

submitted by dloseke to Veeam [link] [comments]

SCCM WDS/PXE troubleshooting

Hello all, I'm going to strengthen my knowledge (or not) of the WDS/PXE feature on SCCM v2006.
History :
Configuration :
DHCP options 66+67 on the DHCP instance. I know Microsoft doesn't recommend that design for reaching across subnets/VLANs, but who knows; I don't want to explain to the network engineer what I intend to implement... a waste of time. Anyway, until now it worked like a charm.
EG :
Was implemented on release before v1901 about one year ago (can't recall exactly)
Was updated on v2002 : after that ... broken
Was forced to use Norton Ghost as a last resort for the past two weeks (and it was painful to... wait). Thanks to the guys on Reddit (for the SCCM client rearm :) )

Problems/issues observed (on a test VM, with the OS firewall turned off):

Booting on PXE
After That .
Logs from SMSPXE.log after the last attempt are below. I read that an unvalidated cert chain can be the culprit, but in the log it seems OK-ish.
I recall that the first time around this worked without the pain; this time it won't even try to find the proper boot file... (:X)
Has anyone faced the same problem? Because now I can only say "what the fonk, seriously" :D.
With the power of fonk, I'm going to try IP helpers... not sure if that will help.
And finally logs (SMSPXE.log)
InstallBootFilesForImage failed. 0x80004005 SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Warning: Failed to copy the needed boot binaries from the boot image D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim.
The operation completed successfully. (Error: 00000000; Source: Windows) SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Failed adding image D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim. Will Retry..
Unspecified error (Error: 80004005; Source: Windows) SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
File D:\RemoteInstall\SMSTemp\2020.{84E47845-E5AB-460E-93B5-9B7F08051E40}.boot.bcd deleted. SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
File D:\RemoteInstall\SMSTemp\2020.{84E47845-E5AB-460E-93B5-9B7F08051E40}.boot.bcd.log deleted. SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Found new image 034000AD SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Loaded Windows Imaging API DLL (version '10.0.19041.1') from location 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\DISM\wimgapi.dll' SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Opening image file D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
Found Image file: D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim
PackageID: 034000AD
ProductName: Microsoft® Windows® Operating System
Architecture: 0
Description: Microsoft Windows PE (x86)
SystemDir: WINDOWS
`SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)` 
Closing image file D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim SMSPXE 9/14/2020 4:37:15 PM 13052 (0x32FC)
InstallBootFilesForImage failed. 0x80004005 SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
Warning: Failed to copy the needed boot binaries from the boot image D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim.
The operation completed successfully. (Error: 00000000; Source: Windows) SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
Failed adding image D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim. Will Retry..
Unspecified error (Error: 80004005; Source: Windows) SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
File D:\RemoteInstall\SMSTemp\2020.{75724072-C04C-4BF2-B834-A47B23D7FDF7}.boot.bcd deleted. SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
File D:\RemoteInstall\SMSTemp\2020.{75724072-C04C-4BF2-B834-A47B23D7FDF7}.boot.bcd.log deleted. SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
Begin validation of Certificate [Thumbprint 6EF26DCEC790BA64CFD3519502FA750109D455E1] issued to '335523aa-73c7-4343-9fb3-10dea91b547d' SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
Completed validation of Certificate [Thumbprint 6EF26DCEC790BA64CFD3519502FA750109D455E1] issued to '335523aa-73c7-4343-9fb3-10dea91b547d' SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
PXE Provider finished loading. SMSPXE 9/14/2020 4:37:16 PM 13052 (0x32FC)
Error opening file: D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim. Win32=32 SMSPXE 9/14/2020 4:37:18 PM 5632 (0x1600)
Retrying D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim SMSPXE 9/14/2020 4:37:18 PM 5632 (0x1600)
Found new image 034000AC SMSPXE 9/14/2020 4:37:21 PM 5632 (0x1600)
Loaded Windows Imaging API DLL (version '10.0.19041.1') from location 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\DISM\wimgapi.dll' SMSPXE 9/14/2020 4:37:21 PM 5632 (0x1600)
Opening image file D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim SMSPXE 9/14/2020 4:37:21 PM 5632 (0x1600)
Found Image file: D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim
PackageID: 034000AC
ProductName: Microsoft® Windows® Operating System
Architecture: 9
Description: Microsoft Windows PE (x64)
SystemDir: WINDOWS
`SMSPXE 9/14/2020 4:37:21 PM 5632 (0x1600)` 
Closing image file D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim SMSPXE 9/14/2020 4:37:21 PM 5632 (0x1600)
Found new image 034000AD SMSPXE 9/14/2020 4:37:23 PM 5632 (0x1600)
Loaded Windows Imaging API DLL (version '10.0.19041.1') from location 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\DISM\wimgapi.dll' SMSPXE 9/14/2020 4:37:23 PM 5632 (0x1600)
Opening image file D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim SMSPXE 9/14/2020 4:37:23 PM 5632 (0x1600)
Found Image file: D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim
PackageID: 034000AD
ProductName: Microsoft® Windows® Operating System
Architecture: 0
Description: Microsoft Windows PE (x86)
SystemDir: WINDOWS
`SMSPXE 9/14/2020 4:37:23 PM 5632 (0x1600)` 
Closing image file D:\RemoteInstall\SMSImages\034000AD\WinPE.034000AD.wim SMSPXE 9/14/2020 4:37:23 PM 5632 (0x1600)
Boot image 034000AC has changed since added SMSPXE 9/14/2020 4:37:28 PM 5632 (0x1600)
Loaded Windows Imaging API DLL (version '10.0.19041.1') from location 'C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\DISM\wimgapi.dll' SMSPXE 9/14/2020 4:37:28 PM 5632 (0x1600)
Opening image file D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim SMSPXE 9/14/2020 4:37:28 PM 5632 (0x1600)
Found Image file: D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim
PackageID: 034000AC
ProductName: Microsoft® Windows® Operating System
Architecture: 9
Description: Microsoft Windows PE (x64)
SystemDir: WINDOWS
`SMSPXE 9/14/2020 4:37:28 PM 5632 (0x1600)` 
Closing image file D:\RemoteInstall\SMSImages\034000AC\WinPE.034000AC.wim SMSPXE 9/14/2020 4:37:28 PM 5632 (0x1600)
============> Received from client: SMSPXE 9/14/2020 4:38:55 PM 5156 (0x1424)
Operation: BootRequest (1) Addr type: 1 Addr Len: 6 Hop Count: 0 ID: 0001E240
Sec Since Boot: 65535 Client IP: Your IP: Server IP: Relay Agent IP:
Addr: 00:15:5d:01:31:08:
Magic Cookie: 63538263
Type=93 Client Arch: EFI BC
Type=97 UUID: 00832d0a20cbcfa24c9774be443d9fdf64
Type=53 Msg Type: 3=Request
Type=60 ClassId: PXEClient
Type=55 Param Request List: 3c8081828384858687
Type=250 0c01000d020800010200070e0100ff SMSPXE 9/14/2020 4:38:55 PM 5156 (0x1424)
Prioritizing local MP http://CSFOREFRONT03.netia-ad.local. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
Not in SSL. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
RequestMPKeyInformation: Send() failed. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
Unsuccessful in getting MP key information. 80004005. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
PXE::MP_InitializeTransport failed; 0x80004005 SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
PXE::MP_LookupDevice failed; 0x80070490 SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
Prioritizing local MP http://CSFOREFRONT03.netia-ad.local. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
Not in SSL. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
RequestMPKeyInformation: Send() failed. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
Unsuccessful in getting MP key information. 80004005. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
PXE::MP_InitializeTransport failed; 0x80004005 SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
PXE::MP_ReportStatus failed; 0x80070490 SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
PXE Provider failed to process message.
Element not found. (Error: 80070490; Source: Windows) SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
00:15:5D:01:31:08, 200A2D83-CFCB-4CA2-9774-BE443D9FDF64: Not serviced. SMSPXE 9/14/2020 4:38:55 PM 9128 (0x23A8)
File D:\RemoteInstall\SMSTemp\2020.{922B1EC8-ECAD-42F6-A001-84177BDD5C13}.boot.bcd deleted. SMSPXE 9/14/2020 4:47:16 PM 3120 (0x0C30)
File D:\RemoteInstall\SMSTemp\2020.{922B1EC8-ECAD-42F6-A001-84177BDD5C13}.boot.bcd.log deleted. SMSPXE 9/14/2020 4:47:16 PM 3120 (0x0C30)
============> Received from client: SMSPXE 9/14/2020 4:53:04 PM 5156 (0x1424)
Operation: BootRequest (1) Addr type: 1 Addr Len: 6 Hop Count: 0 ID: 0001E240
Sec Since Boot: 65535 Client IP: Your IP: Server IP: Relay Agent IP:
Addr: 00:15:5d:01:31:08:
Magic Cookie: 63538263
Type=93 Client Arch: EFI BC
Type=97 UUID: 00832d0a20cbcfa24c9774be443d9fdf64
Type=53 Msg Type: 3=Request
Type=60 ClassId: PXEClient
Type=55 Param Request List: 3c8081828384858687
Type=250 0c01000d020800010200070e0100ff SMSPXE 9/14/2020 4:53:04 PM 5156 (0x1424)
Prioritizing local MP http://CSFOREFRONT03.netia-ad.local. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
Not in SSL. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
RequestMPKeyInformation: Send() failed. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
Unsuccessful in getting MP key information. 80004005. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
PXE::MP_InitializeTransport failed; 0x80004005 SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
PXE::MP_LookupDevice failed; 0x80070490 SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
Prioritizing local MP http://CSFOREFRONT03.netia-ad.local. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
Not in SSL. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
RequestMPKeyInformation: Send() failed. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
Unsuccessful in getting MP key information. 80004005. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
PXE::MP_InitializeTransport failed; 0x80004005 SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
PXE::MP_ReportStatus failed; 0x80070490 SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
PXE Provider failed to process message.
Element not found. (Error: 80070490; Source: Windows) SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
00:15:5D:01:31:08, 200A2D83-CFCB-4CA2-9774-BE443D9FDF64: Not serviced. SMSPXE 9/14/2020 4:53:04 PM 9128 (0x23A8)
submitted by OniSen8 to SCCM [link] [comments]

Working configuration for Xiaomi Mi Desk Lamp 1S (ESP32)

I did a lot of reverse engineering and trial and error and finally came up with the following config that works with the Xiaomi Mi Desk Lamp 1S. It offers the same features as the original firmware and can of course be extended to do a lot more than that.
There is still a little hack in the esphome: section to get it running on the single core ESP32, but I‘ve seen in another thread that someone opened an issue to officially support that eventually.
The original thread with the progress behind that "Project" is here.
If someone has inputs to improve the configuration, I'd be happy to hear it.
esphome:
  name: Mi_Desk_Lamp_1S
  platform: ESP32
  board: esp32doit-devkit-v1
  platformio_options:
    platform: espressif32@1.11.0
    platform_packages: |-4
      framework-arduinoespressif32 @ https://github.com/pauln/arduino-esp32.git#solo-no-mac-crc/1.0.4

wifi:
  ssid: 'SSID'
  password: 'PASSWORD'

ota:

api:
  password: 'PASSWORD'

logger:

sensor:
  - platform: rotary_encoder
    id: rotation
    pin_a: GPIO27
    pin_b: GPIO26
    resolution: 2
    on_value:
      then:
        - if:
            condition:
              # Check if Button is pressed while rotating
              lambda: 'return id(button).state;'
            then:
              # If Button is pressed, change CW/WW
              - lambda: |-
                  auto min_temp = id(light1).get_traits().get_min_mireds();
                  auto max_temp = id(light1).get_traits().get_max_mireds();
                  auto cur_temp = id(light1).current_values.get_color_temperature();
                  auto new_temp = max(min_temp, min(max_temp, cur_temp + (x*20)));
                  auto call = id(light1).turn_on();
                  call.set_color_temperature(new_temp);
                  call.perform();
            else:
              # If Button is not pressed, change brightness
              - light.dim_relative:
                  id: light1
                  relative_brightness: !lambda |-
                    return x / 10.0;
        # Reset Rotation to 0
        - sensor.rotary_encoder.set_value:
            id: rotation
            value: 0

binary_sensor:
  - platform: gpio
    id: button
    pin:
      number: GPIO33
      inverted: True
      mode: INPUT_PULLDOWN
    on_click:
      then:
        # use if-condition instead of toggle to set full brightness on turn_on
        - if:
            condition:
              light.is_on: light1
            then:
              - light.turn_off:
                  id: light1
            else:
              - light.turn_on:
                  id: light1
                  brightness: 100%
                  color_temperature: 2700 K

output:
  - platform: ledc
    pin: GPIO2
    id: output_cw
    min_power: 0.03
    power_supply: power
  - platform: ledc
    pin: GPIO4
    id: output_ww
    min_power: 0.03
    power_supply: power

power_supply:
  - id: power
    pin: GPIO12
    enable_time: 0s
    keep_on_time: 0s

light:
  - platform: cwww
    id: light1
    default_transition_length: 0s
    constant_brightness: true
    name: "Lights"
    cold_white: output_cw
    warm_white: output_ww
    cold_white_color_temperature: 5000 K
    warm_white_color_temperature: 2600 K
submitted by erwinbamert to Esphome [link] [comments]

Auto Blogpost Discord Bot

Auto Blogpost Discord Bot
This was originally a work for just my own server, but I now set it up for anyone to add.
In short - Add this bot to your server. You need admin permission to set it up, type %!help for a quick overview.

Example Blog Post

Feature overview

  • The bot checks the website very frequently. New blog posts should be on all channels within a few seconds.
  • Auto publish new blog posts in Discord's announcement channels (The channels you can follow, only on Community or above servers)
  • Ping options, ping everyone, no one or a specific role
  • Self-hosting (if you really want to): check out binaries or build from source


Commands:
  • %!add: Add the current channel to the notified list
  • %!remove: Remove the current channel from the notified list
  • %!publish on|off: Turn auto-publishing in announcement channels on or off
  • %!ping none|everyone: Set who gets pinged when a new blog post arrives
  • %!info: Show an overview of all channels registered on this server
  • %!report <your message>: Report an issue to the Bot Admin (this will share your user name so they can contact you)
  • %!help: Show a help dialog with all these commands
submitted by wulkanat to HytaleInfo [link] [comments]

Working Intel WiFi + Bluetooth with itlwm

I can't believe I hadn't heard of this sooner! Thanks to u/myusrm for bringing it to my attention.
First, the WiFi.
itlwm is an Intel WiFi driver by zxystd on GitHub. It supports a range of Intel WiFi cards.
This is possible because the driver is a port of OpenBSD's Intel driver, and it emulates an ethernet device (no AirDrop and the like with this, unfortunately).
There's a ton of info from zxystd on his Chinese, invite-only PCBeta thread, but it's hard to understand (and impossible to download the binaries), so I'll share what I've worked out:
There are three kexts available, all to be injected by the bootloader. The first, `itlwm.kext`, is for most Intel cards (like my 9560); a list is available in the GitHub README. The second, `itlwmx.kext`, is for newer WiFi 6 cards. Automatic connections can be configured, optionally, by editing the Info.plist files in the kexts with the SSIDs and passwords to connect to on boot. I'm not sure what the third kext, `itl80211.kext`, is for, but I didn't need it.
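For reference, an automatic-connection entry inside the kext's Info.plist looks roughly like this. The key names here are an assumption based on the builds I used; they may differ in newer versions, so check the itlwm README before copying:

```xml
<!-- Assumed layout: nested under IOKitPersonalities -> itlwm in Info.plist -->
<key>WiFiConfig</key>
<dict>
    <key>WiFi_1</key>
    <dict>
        <key>ssid</key>
        <string>YourNetworkName</string>
        <key>password</key>
        <string>YourPassword</string>
    </dict>
</dict>
```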
There's also an optional app, HeliPort, to configure WiFi settings.
zxystd says they'll release binaries soon, but I've built some myself for those who want prebuilts now: the kexts, and the app.
EDIT: Here are some newer (less tested) builds.
Now, the Bluetooth:
To get Bluetooth working, you can add the kexts from zxystd's repo to your bootloader. Don't put these in /Library/Extensions, as doing so can cause system instability.

I'm amazed that this exists - I thought it would never be possible to get Intel WiFi working at all. This ethernet method is probably the best we'll get, though, as Apple's WiFi APIs are completely undocumented and hard to work with.
(This works for me on macOS Big Sur 11.0 Beta (20A4299v), with an Intel Wireless 9560 card).

EDIT: Guys, please don't make GitHub issues because you can't work out how to build the binaries.
submitted by superl2 to hackintosh [link] [comments]

Is it possible to upload local file to server without using FormData?

My backend server is written in expressjs. I'd like to upload a very large file (~4 GB) to my server and then on to a minio server.
The problem is that the minio API only supports ReadableStream|Buffer|string, of which ReadableStream is my only option, because the other two would explode my memory.
The backend code is like the following
```javascript
app.post('/uploadFile', async (req, res) => {
  const r = await minioClient.putObject("test", "txt.txt", req)
  console.log(r)
})
```
I hope the whole request body is the binary form of the uploaded file. Alternatively, is there a way to make a duplex stream, write a small part (64 KB) into it, let minio consume it, then write another 64 KB? But I don't think I can do that in JavaScript out of the box...
submitted by yukiiiiii2008 to webdev [link] [comments]

Forth as Firmware, How Does it All Work?

I find Forth to be both interesting and confusing. As I understand it, Forth is just barely more abstract than macro assembly in some ways but also a stack-based VM at the same time. Yet, the code is portable, system resources permitting, as long as it's "clean code" devoid of inline ASM.
So, how does all of this work when applied to firmware that is part of a larger OS stack?
With some gaps, at a high level I'm envisioning something like this:
  1. If you're dealing with anything other than SRAM you initialize RAM.
  2. You dump Forth, written in ASM, into a predetermined location in RAM along with any required Forth code.
  3. You set the CPU instruction pointer to the start of Forth's binary image.
  4. Forth starts and calls its auto-start script which sets about initialization of the system bus, etc.
This is about as far as logical inference will take me. Assuming I'm not completely mistaken about how this all works, how does the system go from here?
I'm hung up on...
This is a lot harder for me to visualize than when Forth is the OS. How popular is Forth in the embedded space these days? What makes it more fit than other popular options? I know Macs and the XO laptop both used Forth based firmware back in the day.
submitted by s-ro_mojosa to Forth [link] [comments]

relay of today's AMA with Lisk Research Scientist Maxime who answered questions about Lisk codec and the improvements it will bring to the Lisk SDK developer experience.

The formatting on this is not the prettiest to look at as it came directly from the Lisk discord. But it is what it is, and I hope it helps inform some of you here on the subreddit who are not present on the discord channel.
maximeToday at 3:01 PM Did you guys get to read the blogpost? Had you read the LIP before?
SüleymanToday at 3:02 PM Hi Maxime
maximeToday at 3:03 PM I think in this case the blogpost is much nicer than the LIP, there are quite a few technicalities that are not so important to understand the big picture.
StellardynamicToday at 3:03 PM Hello Maxime! Thank you for taking your time to do this. Is any work being done on specific token, NFT, and other standardized assets from an HQ perspective?
dav1Today at 3:05 PM How will the payload and payloadLength of a genesis block be calculated on a sidechain if I decide to have different initialization transactions there? Will it be possible? Will Lisk Codec offer all the methods for it?
maximeToday at 3:06 PM So yes, with the SDK, we will have a certain number of modules that are almost good "out of the box" and for which the serialization schema will be provided. This won't limit users to then define other assets if they need to.
Corbifex | MoostyToday at 3:06 PM Do you sign transactions before or after encoding?
maximeToday at 3:08 PM @dav1 If I understood, you ask about the size of the genesis block. With the new serialization, you will provide a schema for all objects that your blockchain uses. This schema is used by Lisk Codec and the result can be measured in bytes.
korben3Today at 3:08 PM @Stellardynamic see a comment from #developers. "Also you don’t need NFT with Lisk. It’s already possible to have an account store an asset, like a weapon. Then transfer ownership to another user. Using a custom tx, you can remove the weapon asset from account A and add it to account B. Another method would be for the weapon to have its own account and then its asset would contain a reference to the current, and perhaps previous, owner"
maximeToday at 3:09 PM You must first encode and then sign the bytes that you received. That is why it is really important that the byte sequence is deterministic and unique.
StellardynamicToday at 3:10 PM I'm not sure the result is quite the same.
korben3Today at 3:11 PM How do you use the JSON schema in the custom tx? Is it in the validateAsset method, and how do you apply it? @maxime
Andrew KimToday at 3:12 PM Defi..
sgdiasToday at 3:12 PM How will you keep backward compatibility with previous blocks that don't use the encoding when this goes live?
maximeToday at 3:13 PM The JSON schema is here to define the "structure" of the custom asset. So you can have some basic validation there, like: this is a string, or this address must be this long. This does not cover "blockchain logic", so your custom transaction will also need a "validate asset" and "apply asset" function.
whomiToday at 3:14 PM any related plans from defi? @maxime
maximeToday at 3:14 PM As far as I know, the JSON will not have to be "re-used" in the validate or apply.
ziomekkpToday at 3:16 PM Hello maxime! What about interoperability? Any progress?
maximeToday at 3:17 PM @sgdias that is a good question. The basic rule is: "everything before block N is serialized according to the old rules and everything after is serialized according to the new rules". In essence, if you want to recheck old signatures, you will need the old serialization method.
sgdiasToday at 3:18 PM So the old serialization method won't be deprecated; indeed, it's very much needed for old blocks
maximeToday at 3:18 PM Defi is not really the topic today, sorry.
whomiToday at 3:19 PM ok understand. Thank you @maxime
maximeToday at 3:24 PM There is always progress. :smile: Nothing public yet, though....
gr33ndrag0nToday at 3:24 PM Interesting
sgdiasToday at 3:25 PM Do we have any analog/equivalent of the decentralized PoW SPV-proof 2-way peg for DPoS, or have we only progressed as far as the federated model?
BenF | French LSK AmbassadorToday at 3:26 PM @maxime how should projects share JSON schemas ? Anything planned ? Any standardised discovery mechanism or API endpoint ?
maximeToday at 3:30 PM @sgdias not super sure about your question, in DPoS you can have an "SPV" type proof by providing signature from the delegates (insert details). The difficulty then boils down to "how can you know the active delegates?". Here you need some kind of "current delegates" field saved somewhere and you update that with another SPV proof. But I understand that this is not exactly the same as POW SPV proofs.
maximeToday at 3:31 PM Sharing the JSON shemas should be done with the rest of the code. To connect to a blockchain, you need a few different things, like "seed nodes", "genesis block" etc.... So you would get the schemas at the same time you get the other information.
sgdiasToday at 3:33 PM @maxime no matter if it is not an SPV proof (that was just an example), what matters is whether you guys reach a decentralized 2-way peg (more decentralized than a federation, drivechain or hybrid). The fully decentralized 2-way peg is the holy grail of sidechain interoperability. Are we close to that holy grail?
BenF | French LSK AmbassadorToday at 3:34 PM @maxime sharing the schema was asked in the context of building chain agnostic tools to allow parsing and building correct transaction for any chain. Maybe we should agree on API based information sharing rather than relying on the information in code ? Did I misunderstand your response ?
@BenF | French LSK Ambassador In that case, I agree that we could have some easy way to get the schemas (maybe an API call) to allow generic wallets to build transactions. In general (and from a research point of view) this is not the most secure. It is always better to be able to understand what the custom transaction will do, and for this you would need the code. But in that regard, you probably want your wallet to do some basic checks about the chains it integrates with. In that case, you could integrate all schemas in your wallet when doing those checks.
BenF | French LSK AmbassadorToday at 3:41 PM Agree this could be subject to man in the middle, etc. Maybe a public registry chain for schemas or something. Will put this down as TBD, work in progress, great work. Thanks
sgdiasToday at 3:57 PM @maxime how the codec/serialization works with multisignatures ?
maximeToday at 4:01 PM In terms of sending a transaction from a multisig account: the signature property of a transaction is now an array of signatures and they are serialized in the given order. If you have optional signatures, non-present signatures are serialized as 0. So the signature on a 2 out of 3 account will look like [sig1, 0, sig3].
jongToday at 4:07 PM @maxime I don't fully understand the benefits of this codec change. In the past, Lisk had used binary serialization for blocks and found that it did not affect the size of blocks at all. See https://github.com/LiskHQ/lisk-sdk/issues/1621#issuecomment-369325568
Another counterpoint is that protobuf messages after compression are typically only about 9% smaller in size compared to JSON (which is not a significant difference).
A supporting point can be made that protobuf is faster than JSON at encoding/decoding - But based on my understanding of the Lisk code, the serialization/de-serialization represents a very small portion of the total workload of a Lisk node (it's not a bottleneck) so I don't fully understand the benefits of this codec change.
Are there other benefits aside from size (bandwidth usage) and encoding/decoding performance? Does some other LIP depend on it?
maximeToday at 4:19 PM Hi @jong, thanks for your question. First off, yes, the encoding time represents a rather small part of the whole processing, and it is not the main motivation to use Lisk Codec. The need for the codec is threefold (you have a more detailed explanation in the blogpost and the LIP):
- Uniqueness: if we rely on other tools to generate the binaries and the output for those is affected by anything (for example the node version), then you might create different binaries and reject signatures that others might accept. This would be a "worst case scenario".
- Size: it makes a significant difference (much larger than 9%). If you look for example at the signature in a JSON version of a transaction, it is currently displayed as a hexadecimal string of length 128. With Lisk Codec this goes down to 64 bytes. We could have changed that by displaying signatures in JSON in another format than hexadecimal, but this is not as "human friendly" and would have needed a change anyway (so might as well reap the other benefits of Lisk Codec).
- Ease of use: you could put all the burden of writing a getBytes function on the shoulders of the SDK user, but this would go against the idea of making the SDK as easy to use as possible.
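The size point above is easy to check in Node (illustrative value only, not a real signature):

```javascript
// A 64-byte signature, represented the two ways discussed above:
// as hexadecimal text (the JSON form) and as raw bytes (the codec form).
const sigHex = 'ab'.repeat(64);              // hex text, two characters per byte
const sigBytes = Buffer.from(sigHex, 'hex'); // the same signature as raw bytes

console.log(sigHex.length);   // 128 characters when carried as JSON text
console.log(sigBytes.length); // 64 bytes when carried in a binary encoding
```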
maximeToday at 4:30 PM I'm going to be off in a few minutes, thanks for your question and your support ! For announcements, keep an eye on our monthly achievements blogpost and our other social media. :lisk: Good evening everyone!
submitted by John_Muck to Lisk [link] [comments]

Related videos:
  • How to make API TOKEN and make automatic trading using AUTO TRADER WEB
  • Binary.com API bridge
  • How to create an API Token in binary.com - YouTube
  • How to Creat Binary com API Token
  • How to setup API Token for Binary.com Auto Trader Web By Binary Ex Machina

API guide
App registration. Before using the API, you must register your application:
  1. Open an account at Binary.com (either a Virtual Account or a Real Account).
  2. Go to Security & Limits, select API token and create a new token with the admin scope.
  3. Register your app to obtain your app_id.
Client authentication. Certain API calls require client authentication (e.g. portfolio) whilst others do not.
Difference Between Binary and Vanilla Options. A vanilla American option gives the holder the right to buy or sell an underlying asset at a specified price before the expiration date of the option.
Binary.com is an award-winning online trading provider that helps its clients to trade on financial markets through binary options and CFDs. Trading binary options and CFDs on Synthetic Indices is classified as a gambling activity. Remember that gambling can be addictive – please play responsibly. Learn more about Responsible Trading.
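After the registration steps above, client authentication happens over the WebSocket API by sending an authorize message carrying the token. A hedged sketch of the message construction only (the endpoint URL and message shape follow Binary.com's public API guide as I understand it; verify against the current docs, and treat the app_id and token values as placeholders):

```javascript
// Build the WebSocket endpoint URL for a registered app_id.
// No network calls here, just the URL and message shapes.
function apiUrl(appId) {
  return `wss://ws.binaryws.com/websockets/v3?app_id=${appId}`;
}

// The authorize message is sent as the first message on the socket
// to authenticate the client before any restricted call (e.g. portfolio).
function authorizeMessage(token) {
  return JSON.stringify({ authorize: token });
}
```

Usage would be: open a WebSocket to `apiUrl(YOUR_APP_ID)`, send `authorizeMessage(YOUR_TOKEN)`, and only then issue authenticated calls.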


How to make API TOKEN and make automatic trading using AUTO TRADER WEB
