stop process migration

Satish Patel satish.txt@REDACTED
Mon Mar 2 21:28:40 CET 2020


This is what I did to make it work. It's a hack, but it seems to be
working: I have two NUMA nodes, each with 16 vCPUs, so I am running two
Erlang instances on two different ports, each one bound to its own NUMA
node, like the following:

erlang-mongooseIM-1 CPU bind with NUMA0 (16 vCPU)
erlang-mongooseIM-2 CPU bind with NUMA1 (16 vCPU)

It's very ugly, but it seems to work and I can get more performance out of it.
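A minimal sketch of that setup, assuming numactl is available; the start
commands, install paths, and port numbers here are illustrative, not the
exact ones used above:

```shell
# Run one Erlang/MongooseIM instance per NUMA node, binding both the
# CPUs and the memory allocations to the local node to avoid cross-NUMA
# traffic. Paths and ports below are assumptions for illustration.
numactl --cpunodebind=0 --membind=0 \
    /usr/mongooseim-1/bin/mongooseimctl start    # instance 1, e.g. port 5222
numactl --cpunodebind=1 --membind=1 \
    /usr/mongooseim-2/bin/mongooseimctl start    # instance 2, e.g. port 5223
```

With --membind each instance also allocates memory on its own node, which
is what avoids the remote-memory latency described below.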

On Mon, Feb 24, 2020 at 10:14 AM Satish Patel <satish.txt@REDACTED> wrote:
>
> On hardware I am getting really good performance results, but not on an
> OpenStack VM on the same compute hardware. We want the VM to give us the
> same level of performance we are getting from bare metal (at least 90%,
> if not 100%).
>
> So I built a single large VM (32 vCPU) on a KVM host machine which has 40
> CPU cores with HT. I applied all the best practices for good KVM
> performance (CPU pinning, hugepages, SR-IOV networking, etc.), but when I
> run the application I get only about 50% of the bare-metal benchmark
> result, i.e. performance is much worse. So I tried Erlang CPU pinning to
> bind the schedulers to NUMA0 (so Erlang only uses the 16 vCPU cores
> located on NUMA0), and in that case I got a much better benchmark result
> (around 70%), but then I am losing the NUMA1 CPU cores :(
>
> So I am trying to understand how to tell Erlang: when you see two NUMA
> zones, please keep your processes within one zone and don't migrate them
> from NUMA0 to NUMA1, which causes high memory-access latency and makes
> performance worse.
>
> I am surprised no one has noticed this kind of behavior before.
>
> On Sun, Feb 23, 2020 at 12:20 PM Jesper Louis Andersen
> <jesper.louis.andersen@REDACTED> wrote:
> >
> > This looks far better from my end.
> >
> > I was going to ask what you would like to use your pinning for, but I now see you needed it one level up from where I thought you needed it.
> >
> > On Sun, Feb 23, 2020 at 5:59 PM Satish Patel <satish.txt@REDACTED> wrote:
> >>
> >> Karl,
> >>
> >> This is what we are currently doing to pin the CPUs to a specific NUMA
> >> node to gain performance. I have a 32-core machine and I pin Erlang to
> >> stay on NUMA1 only (because when I pin it across both NUMA nodes,
> >> process migration happens and cross-NUMA traffic hurts performance):
> >>
> >> /usr/mongooseim/erts-7.3/bin/beam.smp -K true -A 5 -P 10000000 -Q
> >> 1000000 -e 100000 -sct
> >> L16t0c0p0n1:L18t0c1p0n1:L20t0c2p0n1:L22t0c3p0n1:L24t0c4p0n1:L26t0c5p0n1:L28t0c6p0n1:L30t0c7p0n1:L17t1c0p0n1:L19t1c1p0n1:L21t1c2p0n1:L23t1c3p0n1:L25t1c4p0n1:L27t1c5p0n1:L29t1c6p0n1:L31t1c7p0n1
> >> -sbt nnts -S 16 -sbt nnts -- -root /usr/mongooseim -progname
> >> mongooseim -- -home /usr/mongooseim -- -boot
> >> /usr/mongooseim/releases/2.1.0beta1/mongooseim -embedded -config
> >> /usr/mongooseim/etc/app.config
> >>
> >> On Fri, Feb 21, 2020 at 2:58 PM Karl Velicka <karolis.velicka@REDACTED> wrote:
> >> >
> >> > Hi,
> >> >
> >> > (I'm reading your question as "how to pin an Erlang process to a CPU _core_".)
> >> >
> >> > There is an option to erlang:spawn_opt - {scheduler, SchedNum} - that pins a process to a specific scheduler in the VM. However, this option is undocumented, so it's probably exposed on a "caveat emptor" basis. We get the possible scheduler numbers with lists:seq(1, erlang:system_info(schedulers_online)), and we've been using the flag in OTP versions 20-22.
> >> >
> >> > The scheduler itself is just a thread from the OS's perspective, so I assume it shouldn't be difficult to pin it to a core.
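> >> > A minimal sketch of that, using only what is described above (the {scheduler, N} option is undocumented and may change between OTP releases; the module and function names here are made up for illustration):

```erlang
-module(pin_demo).
-export([start/0]).

%% Spawn one worker per online scheduler, pinning each process to a
%% specific scheduler via the undocumented {scheduler, SchedNum}
%% spawn_opt option (observed to work on OTP 20-22). Scheduler numbers
%% run from 1 to erlang:system_info(schedulers_online).
start() ->
    Scheds = lists:seq(1, erlang:system_info(schedulers_online)),
    [spawn_opt(fun loop/0, [{scheduler, S}]) || S <- Scheds].

%% Placeholder worker: waits until told to stop.
loop() ->
    receive stop -> ok end.
```

Note that this pins a process to a scheduler, not to a physical core; to tie schedulers to cores you still need the VM-level binding flags (-sbt/-sct) shown earlier in the thread.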
> >> >
> >> > All the best,
> >> > Karl
> >> >
> >> > On Thu, 20 Feb 2020 at 17:05, Satish Patel <satish.txt@REDACTED> wrote:
> >> >>
> >> >> Folks,
> >> >>
> >> >> Can I tell Erlang not to load-balance processes, or not to migrate
> >> >> processes to a different CPU?
> >
> >
> >
> > --
> > J.


More information about the erlang-questions mailing list