What replaces x86 intrinsics for C when Apple ditches Intel CPUs for their own chips?
There are whole industries founded on the use of Intel Intrinsics for CPU parallelisation (with SIMD). For example, the community of Lattice QCD physicists depends on them for a boost in the efficiency of lattice simulations.
Intel-based Macs can be, and routinely are, used by such professionals to do their job. However, there are rumours that Apple will replace Intel CPUs with ARM CPUs in future Macs. Will these professionals have to replace their Macs with other Intel-based computers, or are there alternatives to the Intel Intrinsics for C that are supported on ARM-based CPUs?
Tags: mac, hardware, development, cpu, intel
asked May 8 at 12:38 by Nanashi No Gombe; edited May 8 at 12:46 by jksoegaard
Comments are not for extended discussion; this conversation has been moved to chat.
– bmike♦
May 8 at 19:23
1 Answer
Intel Intrinsics are really just a library that provides easier access to a number of Intel instruction sets - such as SSE (Streaming SIMD Extensions), AVX, etc. - for C programmers. The goal is to be able to utilise these instruction sets for parallelisation and the like without having to do low-level assembly programming by hand.
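To make that concrete, here is a minimal sketch of what the Intel intrinsics look like in C. The function name add4_sse is purely illustrative, but _mm_loadu_ps, _mm_add_ps and _mm_storeu_ps are real SSE intrinsics declared in immintrin.h:

    #include <immintrin.h>   /* Intel intrinsics (SSE, AVX, ...) */

    /* Add four pairs of floats with a single SSE addition.
       (add4_sse is just an illustrative name, not a standard function.) */
    void add4_sse(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);      /* load 4 floats (unaligned) */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);   /* 4 additions at once */
        _mm_storeu_ps(out, vc);           /* store 4 floats */
    }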
The ARM platform has similar instruction sets that serve many of the same purposes. For example, NEON is the ARM alternative to SSE on Intel: it gives you SIMD instructions that you can leverage to increase parallelisation.
And similar to the Intel Intrinsics, you have the ARM compiler intrinsics, which serve the same purpose. You can include "arm_neon.h" in your C program to use NEON instructions through a C interface, without having to resort to low-level assembly programming.
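As a rough NEON counterpart of the SSE sketch above (again, add4_neon is just an illustrative name; vld1q_f32, vaddq_f32 and vst1q_f32 are real intrinsics from arm_neon.h):

    #include <arm_neon.h>    /* ARM NEON intrinsics */

    /* Same operation as the SSE sketch: add four pairs of floats at once. */
    void add4_neon(const float *a, const float *b, float *out)
    {
        float32x4_t va = vld1q_f32(a);        /* load 4 floats */
        float32x4_t vb = vld1q_f32(b);
        float32x4_t vc = vaddq_f32(va, vb);   /* 4 additions at once */
        vst1q_f32(out, vc);                   /* store 4 floats */
    }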
It is worth noting, however, that the instructions available on Intel and ARM are not identical. So, just as with "ordinary programs", you cannot use Intel SIMD instructions on ARM (or vice versa) directly. In practice, programmers often use software libraries with ready-made, higher-level operations that can take advantage of both Intel and ARM instructions. A good example is the "Simd" image processing library (https://github.com/ermig1979/Simd), which offers high-level operations with separate, optimised implementations for SSE, AVX, VMX, VSX and NEON (i.e. Intel, PowerPC and ARM).
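One common pattern such libraries use is selecting the implementation at compile time via the compilers' predefined macros (__SSE__ on x86, __ARM_NEON on ARM), with a scalar fallback for everything else. A minimal sketch, assuming n is a multiple of 4 and with vec_add as a purely illustrative name:

    #if defined(__SSE__)
      #include <immintrin.h>
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>
    #endif

    /* Add n floats element-wise; n is assumed to be a multiple of 4. */
    void vec_add(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
    #if defined(__SSE__)
            _mm_storeu_ps(out + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
            vst1q_f32(out + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #else
            for (int j = 0; j < 4; ++j)       /* portable scalar fallback */
                out[i + j] = a[i + j] + b[i + j];
    #endif
        }
    }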
As far as I can see, the growth in new parallelisation features is strong on both the Intel and ARM platforms - it is essential to providing next-generation performance for some users. On newer ARM chips you have access to, for example, the SVE instruction set (Scalable Vector Extension, which is essentially an even more capable SIMD instruction set for 64-bit ARM processors). There is no inherent advantage to either the Intel or ARM platform in terms of providing new and enhanced SIMD instruction sets for programmers in the future.
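For completeness, here is a minimal, illustrative sketch of what SVE code looks like using the ACLE intrinsics from arm_sve.h (it requires an SVE-capable CPU and a compiler targeting one; sve_add is just an example name). The point is that the loop is vector-length agnostic, so the same code uses whatever vector width the hardware provides:

    #include <arm_sve.h>     /* ACLE intrinsics for SVE */
    #include <stdint.h>

    void sve_add(const float *a, const float *b, float *out, int64_t n)
    {
        for (int64_t i = 0; i < n; i += svcntw()) {       /* step by the hardware vector length */
            svbool_t    pg = svwhilelt_b32_s64(i, n);     /* predicate also handles the tail */
            svfloat32_t va = svld1_f32(pg, a + i);
            svfloat32_t vb = svld1_f32(pg, b + i);
            svst1_f32(pg, out + i, svadd_f32_x(pg, va, vb));
        }
    }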
Apple's own processors (in, for example, iPhones and iPads) have had the NEON instruction set for many years, with the A5 CPUs and later adding further Advanced SIMD extensions, and the very latest A12 CPUs add SIMD instructions for complex-number arithmetic on top of that.
answered May 8 at 12:57 by jksoegaard; edited May 9 at 8:33
Worth pointing out that ARM SIMD intrinsics aren't source-compatible with SSE/AVX/AVX512 intrinsics at all. There are some cross-platform SIMD wrapper libraries that try to portably expose the simpler operations like vertical add/sub/fma, but different asm instruction sets provide different shuffles and other operations. Porting x86 intrinsics to AArch64 is not always straightforward, and is a lot of work even when it's simple. But probably most scientists are using libraries that already have x86 and ARM SIMD code, instead of actually using intrinsics directly.
– Peter Cordes
May 9 at 0:51
Yes, they're not source-compatible - that's why I wrote that they are similar in purpose, but definitely not "identical". It's really hard to make them cross-platform, as in doing so you would most likely lose the performance gain you were trying to achieve in the first place. And yes, lots of software libraries already exist with SIMD code for ARM CPUs - including for standard maths like linear algebra, but also special-purpose libraries for machine learning, computer vision, etc.
– jksoegaard
May 9 at 8:26
Yes, exactly. Since this is an Apple question, not Stack Overflow, I thought it was important to say that more clearly and explicitly for people that don't themselves write code using intrinsics who might miss that phrase. It's a pretty obvious point to people who understand how C compiles to asm and runs on CPUs, but then they wouldn't be asking the question. (Nice update, exactly the kind of thing I was thinking)
– Peter Cordes
May 9 at 8:34
Thanks :-) .....
– jksoegaard
May 9 at 8:36